
Sunday, 7 September 2014

API Integration with Zapier (Gmail to Salesforce)

Recently I attended a training session with General Assembly in London titled 'What and Why of APIs'. It was a training session focusing on the usage of APIs and it was not technical at all. I find this type of training session very useful, as it describes the concepts and controlling ideas behind technologies rather than the hands-on, involved implementation details.

What grabbed my attention among the many different and very useful public and private API tools ('thingies') introduced in this training session was Zapier (www.zapier.com).

Zapier looked to me like a platform for integrating APIs with clicks rather than code, with declarative programming. It is a way of automating the internet. What you get when you sign up is the ability to use ready-made 'Zaps', or to create your own. A Zap is an integration of endpoints, like connecting Foursquare to Facebook or Gmail to Salesforce and syncing them. One of the available Zaps does exactly that: it connects your Gmail emails to Salesforce using the Gmail and Salesforce APIs and lets you sync between them. Not only that, but Zaps also put triggers on the endpoints, which allow you to sync only when certain conditions are true. For example, the Gmail to Salesforce Zap can push your email into a Salesforce Lead only when an email with a certain subject arrives in your Gmail inbox. This is what the Zapier platform looks like:

[Screenshot: the Zapier dashboard]
An individual Zap looks like this, and is nothing more than a mapping of the endpoints with some trigger actions and filters.

[Screenshot: a Zap's endpoint field mapping]
The environment is self-documenting and very easy to use. All you do is drag and drop Gmail fields and match them with the fields of the Lead, or of another custom Salesforce object. Then you configure the sync to happen only under certain conditions/filters. Really easy to set up. The free version runs the sync every 5 hours, which is good enough for me; the paid version runs the sync every 5 minutes.

There is even the capability to track historical runs and to trigger a manual run via the Zap menu. See below the 'Run' command to run a Zap whenever you like.

[Screenshot: the Zap menu with the 'Run' command]
In my case I used the tool to create a Zap that does exactly what I just described: it automatically creates a Lead in my Salesforce org whenever a 'special' email is sent to me. Great automation!

This is a taste of the 'platform cloud' tools out there for doing API-to-API and app-to-app integrations with clicks, not code. With tools like Zapier, all you really need is imagination!

Monday, 14 July 2014

MySQL on-premise to Amazon RDS migration tips

Things to watch and do when migrating MySQL databases from ‘on-premise’ to Amazon AWS RDS
  1. Not all database versions can be migrated to RDS, especially if you want a zero-downtime migration. Make sure you know which versions are possible; at the time of writing, Amazon has announced support for migrating from MySQL versions 5.1 and above.
  2. In a zero-downtime migration to Amazon RDS you work with mysqldump or mydumper to import the baseline data, and then you use MySQL replication and the binary log position to apply the additional records created during the import, the delta. That is, it is possible to create a MySQL slave in the Amazon AWS cloud!
  3. So, once you have confirmed that the on-premise MySQL you have is compatible, you can use mysqldump with the --master-data parameter to export your data, including the binlog position coordinates at the time of the export. If your database is big, you can use mydumper to do this with parallel streams. You will then use the coordinates and MySQL replication to catch up with the on-premise master database when creating the MySQL slave in RDS, as in the sketch after this list.
  4. Use different database parameters (RDS parameter groups) for different databases, so each can be tuned for its own workload.
  5. As you load the RDS database using myloader or mysql, the operation might take a long time, depending on the size of your database. If this is the case, disable automated backups (this also stops binary logging) and try using one of the better-spec RDS instance classes with provisioned IOPS for the duration of the operation. You can always downsize the RDS instance after you have completed the initial load.
  6. After you have completed the initial load, use Multi-AZ, which is a synchronous standby (in Oracle parlance), and schedule the backups immediately, before you open your applications to the database, as the initial backup requires a reboot.
  7. Beware: there is no SSH access to RDS, which means you have no access to the file system.
  8. Get the DB security groups right and make sure your applications can access the RDS instances.
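
For concreteness, here is a minimal sketch of the dump-and-replicate flow from points 2 and 3. The hostnames, credentials and binlog coordinates are all made up, so substitute your own; note that RDS does not allow CHANGE MASTER TO, and instead provides stored procedures such as mysql.rds_set_external_master and mysql.rds_start_replication.

    # Sketch only: hosts, users, passwords and coordinates are hypothetical.

    # 1. Take a consistent baseline dump from the on-premise master;
    #    --master-data=2 records the binlog coordinates as a comment and
    #    --single-transaction gives a consistent InnoDB snapshot.
    mysqldump --single-transaction --master-data=2 --routines --triggers \
      -h onprem-db.example.com -u admin -p mydb > mydb_baseline.sql

    # 2. Find the recorded coordinates near the top of the dump file, e.g.
    #    CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567;
    head -n 30 mydb_baseline.sql | grep 'CHANGE MASTER TO'

    # 3. Load the baseline into the RDS instance.
    mysql -h mydb.xxxx.eu-west-1.rds.amazonaws.com -u admin -p mydb < mydb_baseline.sql

    # 4. Point RDS at the on-premise master using the coordinates from step 2
    #    and start replication to catch up with the delta. The replication
    #    user must already exist on the master with REPLICATION SLAVE rights.
    mysql -h mydb.xxxx.eu-west-1.rds.amazonaws.com -u admin -p -e "
      CALL mysql.rds_set_external_master(
        'onprem-db.example.com', 3306,
        'repl_user', 'repl_password',
        'mysql-bin.000123', 4567, 0);
      CALL mysql.rds_start_replication;"

    # 5. Watch the slave catch up.
    mysql -h mydb.xxxx.eu-west-1.rds.amazonaws.com -u admin -p -e 'SHOW SLAVE STATUS\G'

Once the RDS slave has caught up, the cut-over is just stopping writes on-premise, letting Seconds_Behind_Master reach 0 and repointing your applications at the RDS endpoint.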

Monday, 5 May 2014

My MySQL database impressions

I have been in the data and database world for over a decade now, working mainly with Oracle and on data integration projects with Salesforce.

I have also spent time working with MySQL, a relational database with open source roots, now part of Oracle. In this post I want to talk about my impressions of MySQL and some tips and tricks I learned working with it.

First of all, you will have to be ready to get your hands dirty. I couldn't find a package structure for database administration in MySQL, like the DBMS libraries of packages and procedures in Oracle. That means you will have to do most things on your own. Nevertheless, the good news is that the database starts showing an Oracle banner when you log in from version 5.0 onwards, and features like online schema changes, more cost-based optimisation and partitioning are added in version 5.6, a sign of good things to come.

Some key points

  • Data Import/Export - You can use the native mysqldump utility to dump data with parameters, but it is slow. You can dump schemas and data, but I couldn't get it to dump data fast (in parallel). That is why I strongly recommend mydumper, an open source utility written by guys at Oracle and MySQL, which dumps data using parallel threads and is very fast. Import can be done in parallel as well, with myloader, and it can give you that boost provided your hardware permits it (there is a mydumper sketch after this list). Don't bother disabling constraints or dropping indexes before imports, as you will read in posts and suggestions on the net; mysqldump already does that for you.
  • Hot Backup - mylvmbackup seems to be the de facto script for taking hot backups while the database is online; there are tools like XtraBackup from Percona too. mylvmbackup takes an LVM snapshot of the disk where your datafiles and logfiles are. At restore it does a crash recovery using the logs and brings the database transactions forward to the point of the snapshot. Then, if you have the binary logs from after that point, you can play them forward and bring the database to a point in time after the backup.
  • Parallel processing - Nada, there is none! I couldn't get it to do anything in parallel. The only thing I managed to do in parallel was to export and import data with mydumper, and that works. So if you have many CPUs, you will be watching them sit idle most of the time while a single thread chugs away, unless you use mydumper for your import/export operations, where you can make those CPUs sweat.
  • DBMS packages - Do you fancy automating, do you need scripts for repetitive tasks? Well, there is no DBMS package library to help you administer the database in MySQL. Instead, you can use the Percona Toolkit scripts, from a consultancy specialising in helping MySQL DBAs do great work with MySQL databases. They have a variety of scripts, from comparing (diff) and syncing databases and tables to extracting metadata and GRANTS structures.
  • Hints, Explain Plan, Performance Tuning - I couldn't see much cost-based optimisation in MySQL. The data dictionary (INFORMATION_SCHEMA) has the metadata names but doesn't hold dynamic statistics about objects; the estimates of row counts in tables and indexes it holds can be up to 50% wrong. The whole thing is based on heuristics, I suppose. The EXPLAIN output is just a row per table saying what the optimiser will do; there is no cost analysis or logical tree structure of execution plans yet (see the EXPLAIN sketch after this list). I couldn't see much on join orders either: no Nested Loops, HASH or MERGE joins yet.
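
To make the mydumper point concrete, here is a minimal sketch of a parallel dump and restore. The hostnames, credentials, schema name and paths are hypothetical, and the thread count should match what your hardware can take.

    # Sketch only: hosts, credentials and paths are made up.

    # Dump the 'sales' schema with 8 parallel threads into a directory.
    mydumper -h db1.example.com -u dump_user -p secret \
      -B sales --threads 8 --outputdir /backups/sales_dump

    # Load it back with 8 parallel threads; --overwrite-tables drops
    # and recreates tables that already exist in the target.
    myloader -h db2.example.com -u load_user -p secret \
      -B sales --threads 8 --overwrite-tables --directory /backups/sales_dump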
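
And to illustrate the EXPLAIN point, a quick sketch against a hypothetical orders table. In MySQL 5.x you get one row per table in the query, with columns like type (the access method), possible_keys, key and an estimated rows count, but no cost figure or plan tree as in Oracle.

    # Sketch only: host, schema and column names are made up.
    mysql -h db1.example.com -u app_user -p -e "
      EXPLAIN
      SELECT o.id, o.total
      FROM   sales.orders o
      WHERE  o.customer_id = 42\G"

Compare that single-row output with an Oracle DBMS_XPLAN tree and the difference is obvious.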

MySQL is a popular relational database. The free version of the database is probably all a small website or a start-up needs. But having said that, many sites outgrow MySQL and still stay with it.

Oracle will probably turn it into a serious database too. Adding partitioning and multi-threading in the recent releases is a step towards becoming an enterprise-scale database. I don't know much about the MySQL Cluster version, and MySQL Replication, I know, takes a load off the reads. I want to see it doing more performance tuning science.

Top tools I used with MySQL

MySQL Workbench - SQL IDE.
mydumper - fast logical backup and restore.
mylvmbackup - hot backup script.
Pentaho Kettle (PDI) - an all-round data integration and middleware tool.