
Monday, 12 November 2018

Chart your SQL direct with Apache Zeppelin Notebook

Do you want a notebook where you can write SQL queries against a database and see the results as either a chart or a table?

As long as you can connect to the database with a username and have the JDBC driver, there is no need to transfer data into spreadsheets for analysis. Just download (or run the Docker image of) the Apache Zeppelin notebook and chart your SQL directly!




I was impressed by the ability of the Apache Zeppelin notebook to chart SQL queries directly. To configure this open source tool and start charting your SQL queries, just point it at your database's JDBC endpoint and then start writing SQL in real time.

See below how simple this is: just provide your database credentials and you are ready to go.
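As a rough sketch, the JDBC interpreter settings for an Oracle endpoint would look something like this on Zeppelin's interpreter page (the host, port, service name and credentials below are placeholders, and the Oracle JDBC driver jar also needs to be added as an interpreter dependency):

default.driver    oracle.jdbc.driver.OracleDriver
default.url       jdbc:oracle:thin:@//dbhost.example.com:1521/MYSERVICE
default.user      SCOTT
default.password  ********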





Besides JDBC connections to any database (in my case I used a hosted Oracle Cloud account), the notebook can also handle interpreters like Angular, Cassandra, Neo4j, Python, SAP and many others.

You can download Apache Zeppelin and configure it on localhost, or you can run it with Docker like this:

docker run -d -p 8080:8080 -p 8443:8443 -v $PWD/logs:/logs -v $PWD/notebook:/notebook xemuliam/zeppelin
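Once Zeppelin is up on port 8080, running a query is just a matter of prefixing a notebook paragraph with the JDBC interpreter. A minimal sketch, using a made-up employees table:

%jdbc
SELECT department, COUNT(*) AS headcount
FROM   employees
GROUP  BY department

The result then offers table, bar, pie and other chart views with no extra code.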

Thursday, 10 May 2018

How much data do you have?

Sometimes you need to ask this simplest of questions about your database to figure out what the real size of your data is.

Databases store loads of auxiliary data such as indexes, aggregate tables, materialized views and other structures in which the original data is repeated. Many times databases repeat the data in these structures for the sake of better performance for the applications and reports they serve. The duplicate storage of data, in this case, is legitimate. It is there for a reason.

But should this repetition be measured and included in the database 'data' size?

Probably yes. After all, it is data, right?

To make things worse, over time many databases develop white space in their storage layer due to the volume of updates and deletes. This white space is unused, fragmented free space which cannot be reused by new data entries. This is bad. Often it will end up being scanned unnecessarily in full table scan operations, eating up your computing resources. But the most unfortunate fact is that it will appear as data in your database size measurements when it is not!

It is just unused white space, nothing but costly void. Very bad.

There are mechanisms in databases which, when enabled, will automatically reclaim the white space and re-organise the storage of data in the background, saving you space, time and money. Here is a link which discusses such mechanisms at length: https://oracle-base.com/articles/misc/reclaiming-unused-space
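In Oracle, for example, one such mechanism is an online segment shrink. A minimal sketch, with a placeholder table name (shrinking requires row movement to be enabled and a tablespace using Automatic Segment Space Management):

ALTER TABLE my_big_table ENABLE ROW MOVEMENT;
ALTER TABLE my_big_table SHRINK SPACE;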

One should be diligent, even suspicious, when measuring database sizes. There is a lot of data which is repeated, and some of it is just the blank void left behind by fragmentation and unused white space. You will be surprised to see how much white space exists in your database if you do not reclaim it during maintenance. If you are curious to find out, there are ways to measure the white space and the real data.

So, how do we measure?
Below is a database-size measuring SQL script which can be used with Oracle to show the data (excluding indexes) in tables and partitions. It also tries to estimate real storage (in the actual_gb column), excluding the white space, by multiplying the number of rows in a table by the average row size. Replace '<YOURSCHEMA>' in the code with the schema you wish to measure, and make sure statistics have been gathered before you measure.
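If the statistics are missing or stale, they can be gathered first. A minimal sketch using DBMS_STATS, with the same schema placeholder:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<YOURSCHEMA>');
END;
/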

SELECT SUM(actual_gb)  AS actual, 
       SUM(segment_gb) AS allocated 
FROM   ( 
                SELECT   s.owner, 
                         t.table_name, 
                         s.segment_name, 
                         s.segment_type, 
                         t.num_rows, 
                         t.avg_row_len, 
                         t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 actual_gb, 
                         SUM(s.bytes)  / 1024 / 1024 / 1024              segment_gb 
                FROM     dba_segments s, 
                         dba_tables t 
                WHERE    s.owner = '<YOURSCHEMA>' 
                AND      t.table_name = s.segment_name 
                AND      s.segment_type IN ('TABLE', 
                                            'TABLE PARTITION', 
                                            'TABLE SUBPARTITION') 
                GROUP BY s.owner, 
                         t.table_name, 
                         s.segment_name, 
                         s.segment_type, 
                         t.num_rows, 
                         t.avg_row_len, 
                         t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 );
ACTUAL     ALLOCATED
---------- ----------
18.9987    67.3823

Saturday, 3 March 2018

SQL with Apache Spark, easy!

After reading about cluster computing developments like Apache Spark and its SQL support, I decided to find out for myself.

What I was after was to see how easy it is to write SQL in Spark SQL. In this micro-post I will show you how easy it is to run SQL against a JSON file.

For my experiment I will use my chrome_history.json file, which you can export from the Chrome browser using the extension www.JSON-XLS.com. To run the SQL query with PySpark on my laptop I will use the PyCharm IDE. After a little bit of configuration in PyCharm, setting up the environment (SPARK_HOME), there it is: it only takes 3 lines to be able to run a SQL query on a JSON document in Spark SQL.
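As a rough sketch of what those 3 lines look like in PySpark (the file path and the url column are assumptions about what the JSON export contains):

from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session
spark = SparkSession.builder.appName("chrome-history-sql").getOrCreate()

# Register the JSON file as a temporary view and query it with plain SQL
spark.read.json("chrome_history.json").createOrReplaceTempView("history")
spark.sql("SELECT url, COUNT(*) AS visits FROM history "
          "GROUP BY url ORDER BY visits DESC").show(10)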





Think of the possibilities with SQL and the 'cluster' partitioning and parallelisation you can achieve!


Links:

Apache Spark: https://spark.apache.org/downloads.html

PyCharm: https://www.jetbrains.com/pycharm/