Saturday, 17 October 2020

Oracle Apex 20.2 REST API data sources and database tables

Oracle Apex 20.2 is out and has a very interesting new feature: REST Data Source Synchronisation.


Why is the REST Data Source Synchronisation feature interesting?


Oracle Apex REST Data Source Synchronisation is exciting because it lets you query REST endpoints on the internet, on a schedule or on demand, and saves the results automatically in database tables.


I think this feature will suit slow-changing data accessible via REST APIs very well. If a REST endpoint's data is known to change only, say, a few times a day, why should we call the endpoint over HTTP every time we want to display that data on an Apex page? Why cause network traffic and keep machines busy for data which does not change often? Or maybe, by requirement, you only need to query a REST endpoint once a day and store the result somewhere for data warehousing.


Wouldn't it be better to store the data in a database table and render it from there every time a page is viewed?


This is exactly what REST Data Source Synchronisation does: it queries the REST API endpoint and saves the JSON response into a database table, on a schedule of your choice or on demand.


For my experiment, I used the free public TfL REST API endpoint, which holds data on TfL transport disruptions, and I configured this endpoint to synchronise with my database table every day at 5am.





I even created the Oracle Apex REST Data Source inside the apex.oracle.com platform. I used the key provided by the TfL API developer platform to make the call to the TfL REST endpoint, and I managed to sync it once a day, feeding an Oracle Apex Faceted Search page and some charts.


I was able to do all this with zero coding: just by pointing the Oracle Apex REST Data Source I created for the TfL API at a table and scheduling the sync to happen once a day at 5am.
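Once the synchronisation has run, the REST data sits in an ordinary database table, so any Apex region can query it with plain SQL and no HTTP call at page-view time. A minimal sketch, assuming the local sync table is called TFL_DISRUPTIONS with CATEGORY and SEVERITY columns (all names hypothetical; yours depend on the columns Apex discovered for the endpoint):

-- Query the locally synchronised copy of the REST data
SELECT category,
       severity,
       COUNT(*) AS disruptions
FROM   tfl_disruptions          -- hypothetical sync table name
GROUP  BY category, severity
ORDER  BY disruptions DESC;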


To see the working app, go to this link: https://apex.oracle.com/pls/apex/databasesystems/r/tfl-dashboard/home


Screenshots of the app are below.






Wednesday, 13 November 2019

Oracle Apex Social Sign in

In this post I want to show you how I used the Oracle Apex Social Sign in feature for my Oracle Apex app. Try it by visiting my web app beachmap.info.




Oracle Apex Social Sign in gives you the ability to use OAuth2 to authenticate and sign in users to your Oracle Apex apps using social media providers like Google, Facebook and others.

Google and Facebook are the prominent authentication providers currently available; others will probably follow. Social sign in is easy to use and requires no code: all you have to do is obtain project credentials from, say, Google, pass them to the Oracle Apex framework, and put the sign-in button on the page which requires authentication, and the flow will kick in. I would say it is at most a three-step operation. Step-by-step instructions are available in the blog posts below.


Further reading:




Saturday, 23 February 2019

Chasm Trap problem in Data Warehouses

A chasm trap in data warehouses occurs when two fact tables share one dimension table. This is a data modelling problem which will cause double counting and bad data when these tables are joined. SQL and relational databases surprise me every day!

Star schemata in data warehouses usually have one fact table and many dimension tables, where the fact table joins to its dimension tables via foreign keys. This is OK.

But what if you wanted to 'share' a dimension across two or more fact tables, using it in common and slowly starting to create an intertwined galaxy, maybe! These multi-fact schemata are also called fact constellations.

For example, think of a CUSTOMERS dimension table with the details of the customers, and two fact tables, SALES and REFUNDS, with order and refund transactions, as in the data model below.





Preparing sample data to reproduce the Chasm Trap

CUSTOMERS



SALES


REFUNDS



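The original screenshots of the sample data are not reproduced here, but a minimal sketch of equivalent tables and rows (names and values hypothetical, matching the model above) is enough to trigger the trap:

-- One shared dimension and two fact tables
CREATE TABLE customers (
  customer_id NUMBER PRIMARY KEY,
  name        VARCHAR2(50)
);

CREATE TABLE sales (
  sale_id     NUMBER PRIMARY KEY,
  customer_id NUMBER REFERENCES customers (customer_id),
  amount      NUMBER
);

CREATE TABLE refunds (
  refund_id   NUMBER PRIMARY KEY,
  customer_id NUMBER REFERENCES customers (customer_id),
  amount      NUMBER
);

-- One customer with two sales and two refunds
INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO sales     VALUES (10, 1, 100);
INSERT INTO sales     VALUES (11, 1, 200);
INSERT INTO refunds   VALUES (20, 1, 40);
INSERT INTO refunds   VALUES (21, 1, 60);
COMMIT;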

BANG! Chasm Trap spotted in the join of the SALES, REFUNDS and CUSTOMERS tables
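A sketch of the offending query against the sample tables above. With two sales and two refunds for the same customer, the join returns 2 x 2 = 4 rows instead of 2 of each, and any SUM over the amounts is inflated:

-- Joining both fact tables through the shared dimension:
-- each SALES row pairs with each REFUNDS row for the same
-- customer, producing a partial Cartesian product.
SELECT c.name,
       s.amount AS sale_amount,
       r.amount AS refund_amount
FROM   customers c
       JOIN sales   s ON s.customer_id = c.customer_id
       JOIN refunds r ON r.customer_id = c.customer_id;
-- 4 rows for Alice: every sale is duplicated per refund, so
-- SUM(s.amount) and SUM(r.amount) are both wrong.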





What happened? Do you see the duplicates? We joined by the n-1 principle, that is, 3 tables and 2 join conditions on the keys, yet it seems impossible to query these two fact tables via their common dimension. Why can't we tell how many sales and refunds a customer made? Strange: why does the query fall into a partial Cartesian product when you query two fact tables via a common dimension table? Hint: think of the data modelling.

But first, let's try to sort out the chasm trap.


A quick SQL solution to the chasm trap is a pre-aggregated join of the tables like this:
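A sketch of such a pre-aggregated join, using the hypothetical sample tables from above: each fact table is collapsed to one row per customer before joining, so there is nothing left to multiply.

SELECT c.name,
       s.total_sales,
       r.total_refunds
FROM   customers c
       LEFT JOIN (SELECT customer_id, SUM(amount) AS total_sales
                  FROM   sales
                  GROUP  BY customer_id) s
              ON s.customer_id = c.customer_id
       LEFT JOIN (SELECT customer_id, SUM(amount) AS total_refunds
                  FROM   refunds
                  GROUP  BY customer_id) r
              ON r.customer_id = c.customer_id;
-- One row per customer: total_sales = 300, total_refunds = 100.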


An alternative solution is to study the data model again and look for superfluous or wrongly assumed data relationships and dependencies. Maybe the dimension table is not meant to be shared. A review of the data model and the data warehouse design might be necessary.

Conclusion

If you are going to use one dimension with two fact tables in your data warehouse, make sure you don't fall into the chasm trap, or your data will be incorrect and untrustworthy. You must first pre-aggregate and then join to get correct results, or improve your understanding of the model and its relationships. For example, maybe the REFUNDS entity is not directly related to CUSTOMERS but to SALES?

Once again we see how important data modelling, and understanding the problem the data is trying to solve, really is.






Monday, 12 November 2018

Chart your SQL direct with Apache Zeppelin Notebook

Do you want a notebook where you can write SQL queries against a database and see the results either as a chart or a table?

As long as you can connect to the database with a username and have the JDBC driver, there is no need to transfer data into spreadsheets for analysis: just download (or run in Docker) the Apache Zeppelin notebook and chart your SQL directly!




I was impressed by the ability of the Apache Zeppelin notebook to chart SQL queries directly. To configure this open-source tool and start charting your SQL queries, just point it at your database's JDBC endpoint and start writing SQL in real time.

See below how simple this is: just provide your database credentials and you are ready to go.
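Once the JDBC interpreter holds the connection URL, username and password, a note paragraph is just the interpreter binding followed by plain SQL. A minimal sketch (the SALES table is hypothetical):

%jdbc
-- Any SQL your database understands can go here
SELECT region,
       SUM(amount) AS total_sales
FROM   sales
GROUP  BY region;

Run the paragraph, then use the toolbar above the result to flip between table, bar, pie, area, line and scatter views.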





Besides JDBC connections to any database (in my case I used a hosted Oracle cloud account), the notebook can also handle interpreters like Angular, Cassandra, Neo4j, Python, SAP and many others.

You can download Apache Zeppelin and configure it on localhost, or you can run it in Docker like this:

docker run -d -p 8080:8080 -p 8443:8443 -v $PWD/logs:/logs -v $PWD/notebook:/notebook xemuliam/zeppelin

Thursday, 10 May 2018

How much data do you have?

Sometimes you need to ask this most simple question about your database to figure out what the real size of your data is.

Databases store loads of auxiliary data, such as indexes, aggregate tables, materialized views and other structures in which the original data is repeated. Databases often repeat data in these structures for the sake of better performance for the applications and reports they serve. The duplicate storage of data, in this case, is legitimate. It is there for a reason.

But should this repetition be measured and included in the database 'data' size?

Probably yes. After all, it is data, right?

To make things worse, many databases, due to frequent updates and deletes, over time create white space in their storage layer. This white space is unused, fragmented free space which cannot be reused by new data entries. This is bad. Often it will be scanned unnecessarily in full table scan operations, eating up your computing resources. But the most unfortunate fact is that it will appear as if it is data in your database size measurements when it is not!

It is just unused white space, nothing but costly void. Very bad.

There are mechanisms in databases which, when enabled, will automatically remedy the white space, re-organise the storage of data in the background and save you space, time and money. Here is a link which talks about such mechanisms at length: https://oracle-base.com/articles/misc/reclaiming-unused-space
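For example, in Oracle an online segment shrink compacts a table and lowers its high water mark. A minimal sketch (the table name is hypothetical, and the table must live in an ASSM tablespace):

-- Allow Oracle to move rows between blocks, then compact
-- the segment and release the reclaimed space.
ALTER TABLE sales ENABLE ROW MOVEMENT;
ALTER TABLE sales SHRINK SPACE;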

One should be diligent, even suspicious, when measuring database sizes. There is loads of data which is repeated, and some of it is just blank void due to fragmentation and unused white space. You will be surprised to see how much white space exists in your database if you do not reclaim it during maintenance. If you are curious to find out, there are ways to measure the white space and the real data.

So, how do we measure?
Below is a database-size measuring SQL script which can be used with Oracle to show the data (excluding indexes) in tables and partitions. It also tries to estimate the real storage (in the actual_gb column), excluding the white space, by multiplying the number of rows in each table by the average row length. Replace '<YOURSCHEMA>' in the code with the schema you wish to measure, and make sure statistics have been gathered before you measure.

SELECT SUM(actual_gb)  AS actual, 
       SUM(segment_gb) AS allocated 
FROM   ( 
                SELECT   s.owner, 
                         t.table_name, 
                         s.segment_name, 
                         s.segment_type, 
                         t.num_rows, 
                         t.avg_row_len, 
                         t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 actual_gb, 
                         SUM(s.bytes)  / 1024 / 1024 / 1024              segment_gb 
                FROM     dba_segments s, 
                         dba_tables t 
                WHERE    s.owner = '<YOURSCHEMA>' 
                AND      t.owner = s.owner 
                AND      t.table_name = s.segment_name 
                AND      s.segment_type IN ('TABLE', 
                                            'TABLE PARTITION', 
                                            'TABLE SUBPARTITION') 
                GROUP BY s.owner, 
                         t.table_name, 
                         s.segment_name, 
                         s.segment_type, 
                         t.num_rows, 
                         t.avg_row_len, 
                         t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 );
ACTUAL     ALLOCATED
---------- ----------
18.9987    67.3823