About Me

I am currently working as a DB Consultant with Ashnik. We are into Open Source and Cloud based enterprise class solutions. Prior to this I was working as a DBA at Misys India (Banking Services), working on IBM DB2, Oracle and Microsoft SQL Server. I was selected as an IBM Champion for Data Management in 2011. I like a bit of photography, cycling and poetry, and I just love gadgets!

Friday, December 12, 2014

Easing the Pain of Migrating from SQL Server to PostgreSQL


Migrating any kind of software from one vendor's product to another's is never easy or risk-free. You always have to go through proper testing cycles, and at times changes are required in the application working on top of the product (OS, database, app server, web server etc.) being migrated.

The same applies to relational databases. Though fundamentally all relational databases are supposed to adhere to the same relational theory, each has its own specific implementation of commands and functions. This leads to application changes, and with those changes comes the need for additional re-testing effort. The pain is relieved quite a bit by the fact that relational databases try to adhere to ANSI SQL syntax, but each has a few exceptions in how it complies with the standard.

In my day job as a DB Solution Architect, I come across one or two new migration cases almost every month, and at times these are migrations from SQL Server to PostgreSQL. In my experience and opinion, SQL Server is the implementation of relational theory with the most deviations from ANSI syntax. For example, most RDBMSs use the double pipe [ || ] as the concatenation operator, whereas SQL Server (for some good or bad reason) chose + instead.

Another feature which I have seen being a pain while migrating is 'case-insensitive search'. All databases support it if you use the UPPER() function on strings before comparing them. But in SQL Server that is the default and only behaviour if the database was created with a collation ending in CI_AS or CI_AI (meaning case insensitive and accent sensitive/insensitive). I have faced the challenge of migrating such customers.

When you look at the database while assessing it for migration, it all looks fine. The SQL Server database has only tables (no programmable objects, e.g. procedures or functions) and no 'funny' data types [again, the whole world calls timestamp a date-time, while SQL Server uses it as a binary datatype for optimistic locking :) ]. You migrate it over (quite easily, in fact), and finally, when you connect the application, you start getting errors left, right and centre!

Well, PostgreSQL is not just a stable alternative to proprietary databases; it also offers great flexibility, which makes it a popular alternative. Let me share two examples where PostgreSQL can really make it easy for you to migrate over.
In PostgreSQL you can create user-defined casts and user-defined operators. So, to overcome the issues I have mentioned above, let me create user-defined operators and test them-


1. + as a concatenation operator-

Let's first create a function which can concatenate two strings
___________________________________

CREATE FUNCTION 
           concat_with_plus(varchar, varchar) 
          RETURNS varchar
    AS $$ SELECT $1 || $2 $$
    LANGUAGE SQL;
___________________________________

Now let's create a custom operator which takes two arguments (left = string and right = string)

___________________________________


CREATE OPERATOR + (
    LEFTARG = varchar,
    RIGHTARG = varchar ,
    PROCEDURE = concat_with_plus
);
___________________________________

Notice that the operator uses the function we defined above.

Here is how it would work-

___________________________________

postgres=# select 'sameer' + 'kumar';
  ?column?
-------------
 sameerkumar
(1 row)
___________________________________

2. Enable Case-insensitive search for all the tables and columns

All we need to do is create an '=' operator overriding the default behaviour.
Before that, let's first define a function which matches two strings irrespective of their case.

______________________________________

CREATE FUNCTION 
    varchar_equal_caseinsensitive(varchar, varchar) 
    RETURNS boolean
    AS $$ SELECT upper($1) = upper($2) $$
    LANGUAGE SQL;
______________________________________


Let's now create an '=' operator using this function

______________________________________

CREATE OPERATOR = (
    LEFTARG = varchar,
    RIGHTARG = varchar,
    PROCEDURE = varchar_equal_caseinsensitive,
    COMMUTATOR = =,
    NEGATOR = !=,
    HASHES, MERGES
);
______________________________________

Here is how it would work-

______________________________________

postgres=# SELECT * FROM test_varchar 
postgres-# WHERE col1='SAMEER';
  col1
--------
 sameer
 Sameer
 SameeR
 SAmeeR
 Sameer
<<more rows in result>>
______________________________________

And if you define an index over UPPER(COL1), 

______________________________________

CREATE INDEX upper_col_test_varchar ON 
      test_varchar(UPPER(col1));
______________________________________


it would be used for index scans/bitmap scans-
____________________________________________________________________________

postgres=# EXPLAIN SELECT * FROM test_varchar WHERE col1='SAMEER';
                                       QUERY PLAN                         
----------------------------------------------------------------------------------------
 Bitmap Heap Scan on test_varchar  (cost=11.77..394.81 rows=432 width=10)
   Recheck Cond: (upper((col1)::text) = 'SAMEER'::text)
   ->  Bitmap Index Scan on upper_col_test_varchar  (cost=0.00..11.66 rows=432 width=0)
         Index Cond: (upper((col1)::text) = 'SAMEER'::text)
 Planning time: 2.685 ms
(5 rows)

____________________________________________________________________________


These are some of the most frequently deployed tricks when I migrate customers from SQL Server to PostgreSQL. These wrappers are written inside your database. You can create them in the template1 database, and hence all the databases created thereafter will have these operators. This is not only easy but also does away with the requirement of changing application code.
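
For instance, here is a minimal sketch (reusing the + operator definition shown above; the database name is purely illustrative) of creating these wrappers in template1 from psql, so that every database created afterwards inherits them-
___________________________________

-- connect to template1, the default template for CREATE DATABASE
\c template1

-- recreate the wrapper function and operator here
CREATE FUNCTION concat_with_plus(varchar, varchar)
    RETURNS varchar
    AS $$ SELECT $1 || $2 $$
    LANGUAGE SQL;

CREATE OPERATOR + (
    LEFTARG = varchar,
    RIGHTARG = varchar,
    PROCEDURE = concat_with_plus
);
-- (do the same for the '=' operator and its function)

-- any database created after this point gets a copy of these objects
\c postgres
CREATE DATABASE appdb;
___________________________________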

Stay tuned, and we will be sharing more such tips and tricks which help you break away from the lock-in of proprietary database vendors and move to a much more cost-effective solution.

In case you are looking for professional services for migration, you may contact my employer, Ashnik (www.ashnik.com | success@ashnik.com). We offer a wide range of PostgreSQL services - Migration, Heterogeneous Replication, Hardening, Tuning, High Availability, DR Setup and anything that has to do with PostgreSQL.

Tuesday, November 18, 2014

Emulate redo Multiplexing in PostgreSQL


When it comes to reliability and recoverability, PostgreSQL is as good a database as any other RDBMS implementation we see in the market. Postgres complies very strongly with the ACID philosophy of relational databases. It has a multi-version concurrency model where older snapshots of rows/data are preserved as long as any transaction which may need them is alive (something similar to the UNDO tablespaces or rollback segments of Oracle). Recoverability of the database is ensured by WAL segments, which you can relate to the redo log files of Oracle or the transaction logs of SQL Server. These WAL files are useful for crash recovery (after an OS or hardware crash). You can enable archiving of these WAL files, copy them to a separate disk or server, and use them for more complex recovery scenarios. I will talk about this at length in one of my next blogs.
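
Just as a quick preview, here is a minimal sketch of the postgresql.conf settings involved in archiving (the archive destination below is purely illustrative)-

                     wal_level = archive            # 'hot_standby' also works
                     archive_mode = on
                     archive_command = 'cp %p /backup/wal_archive/%f'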

One thing which I have seen Oracle DBAs boast about is multiplexing of redo log files by creating multiple redo log groups. Well, that is something which is not there in PostgreSQL out of the box (mainly because it's not really needed, given that we have RAID devices and multiplexing/mirroring at the OS/storage level). Still, if you need to multiplex your WAL segments, you can do that quite easily from v9.2 onwards with pg_receivexlog.

So, pg_receivexlog is a utility based on streaming replication which can stream WAL files in real time. Unlike archive_command, it does not require WAL files to be completely filled (16MB) before they can be sent over. The streamed files can be used for archiving or replication.

You can setup a local stream for which you need to-

1. Set max_wal_senders to a value of 1 or greater. If you have set up replication or are using pg_basebackup, please count those connections too.

                     max_wal_senders=2

2. Set the wal_level to archive or hot_standby
                     wal_level=hot_standby

3. Set pg_hba.conf to allow replication connections from local (UNIX Socket)

                      local  replication postgres        trust

4. Restart your DB server after these changes
                     pg_ctl restart -m fast

5. Start pg_receivexlog-
                    pg_receivexlog -D /usr/local/pgsql/wal_2 -U postgres

Make sure that PGHOST is not set in the session from which you are firing this command.

This command will copy (stream) the WALs from the local server (on the default port, or the one pointed to by PGPORT) and store them in the secondary WAL location - /usr/local/pgsql/wal_2


Now if your server crashes and you lose your WAL directory, all you need to do is look for a file with the ".partial" extension in your secondary WAL location (/usr/local/pgsql/wal_2) and rename the file to remove the extension. Then copy the contents of the secondary WAL location to the primary location and restart the server. The last chunk of changes was present in this ".partial" file, which otherwise would not have been archived.
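
Here is a minimal sketch of those steps (the WAL segment name and the data directory below are hypothetical; adjust them to your setup)-

                     cd /usr/local/pgsql/wal_2
                     mv 00000001000000000000002A.partial 00000001000000000000002A
                     cp /usr/local/pgsql/wal_2/* /usr/local/pgsql/data/pg_xlog/
                     pg_ctl start -D /usr/local/pgsql/data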


Note: There is one caveat, though - the files in the secondary WAL directory will not get rotated or removed automatically. You may do that periodically with the pg_archivecleanup command.
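
For example (the segment name is again hypothetical; everything older than the segment you pass is removed)-

                     pg_archivecleanup /usr/local/pgsql/wal_2 00000001000000000000002A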


Wednesday, June 11, 2014

Postgres Hangout #2

Watch Ashnik’s Google Hangout event on PostgreSQL right here on 28th May 2014, 4.30 pm SGT! Talking on Tips & Tricks on Tuning Slow Running SQLs.
Are slow running SQLs slackening your application? Have a bad performing query at hand and don’t know what to do? Catch us live and keep learning!



Tips & Tricks on Tuning Slow Running SQLs in PostgreSQL



Watch the prequel over here
Follow @Ashnikbiz on Twitter to get the latest updates on the next Hangout.

Postgres Hangout #1

Watch Ashnik’s Video Blog showcasing ‘Busting that slow running SQL’

Watch the recording of the first ever Postgres Hangout and see how I took viewers through finding slow running queries. The event was hosted by Ashnik.

Follow @Ashnikbiz on Twitter to get the latest updates on the next Hangout.



Tuesday, November 12, 2013

Secure your DB the Hollywood style



Today I was discussing with a colleague, who found it strange that some people use random names for their databases or their database directories.
After pondering over it for a while, I thought it's a good thing.

Now if you were a hacker and had somehow got access to my DB server, you would still have to guess the name of my database. So you would do a  " ps -ef | grep -i postgres "

Or, if you were lucky, you could just go to the default location or use the $PGDATA variable.

Well, these things make the life of a DBA easy, but they make the life of a hacker equally easy.

A few elements of security which I have learnt from Hollywood flicks are:
  1. Create confusion with duplicates: Remember "Italian Job"
  2. Use unusual names: In war movies, do you hear "Tangos" talking to "Charlies"?
  3. Avoid/hide using usual (default) identity: That is why they posed as Movie Crew in "Argo"
  4. Hide/lock the identity/password of the superuser and pose a pseudo superuser for everyone to use and interact with: Didn't Jim die before he could control everything and hide his identity in Mission Impossible?

Suppose I have 3 instances running:

pg_ctl start -D /opt/Bali/db
pg_ctl start -D /opt/KulaLumpur/db
pg_ctl start -D /opt/Bangkok/db

Now you would have to log in to at least 3 instances, where again I can have multiple databases by name:
Bali/db: TamanAyana
Bali/db: Tan
Bali/db: Ulluwatu

KulaLumpur/db: TwinTower
KulaLumpur/db:  CyberJaya


Bangkok/db: WatArun
Bangkok/db: RoyalPalace


This means you have to do more work to get into my system, which means I have more time to stop the intrusion.


Of course, if you were smart, you could check memory utilization, the number of connections, etc. But what if I have set rules for process priority and am using the same amount of memory for each instance (while smartly allowing the pages of instances not being used to be paged out)? I would keep you confused for a little longer before you know which database/instance is your true target. Well, anyway, the Italian Job heist too was pulled off by figuring out load etc. and hence isolating the identity of the true target.


In the same way, if possible, run your PostgreSQL database on a non-default port. Why? Suppose I have not been able to crack into your DB server but have managed to figure out its IP; my first attempt at connecting to PostgreSQL would be aimed at 5432. A determined hacker will eventually be able to hit the correct spot/port. Sooner or later your true identity will be out, but that will take time, and that means you have more time to board that Swiss Airlines flight and safeguard yourself.
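
Changing it is a one-line edit in postgresql.conf (the value below is just an example), followed by a restart-

port = 5433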

Now, one really great feature that PostgreSQL has is hba (host-based) authentication. It is always good to add a line like the one below at the end of your pg_hba.conf:


host all all 0.0.0.0/0 reject


It will make sure that any connection from an unauthorized IP address is rejected.

Additionally, the first attempt someone will make at connecting to your DB server would be with the standard users, e.g. enterprisedb, postgres etc. Make sure you set the passwords of these users to something completely random that you don't need to remember. Just create another superuser and always use that user for connecting and doing maintenance work. Make sure that the username is not your first name. I would prefer "wolverine".
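
A minimal sketch of that idea (the role name and passwords are purely illustrative)-

ALTER USER postgres WITH PASSWORD 'Xk93-random-string-you-never-memorise';
CREATE USER wolverine WITH SUPERUSER PASSWORD 'a-strong-password-you-do-remember';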


Some installers of PostgreSQL might create a .pgpass file in your OS user's home directory. It's better to remove it unless you think there is a use case for it in your environment.


Grant your users only what they need. I will stand by this to my death. There is no application written which needs DBA, superuser or any similar privilege on your database or instance. Never, ever grant these to any user (of course, unless your job is at stake). Grant them only the few database privileges which "you think" they are going to need. If they hit an issue or the database tells them they cannot do a certain operation, grant permission for that operation. If you want a really secure database, do this as many times as your users hit a problem.
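
A minimal sketch of this approach (database, schema and role names are illustrative)-

CREATE USER app_user WITH PASSWORD 'app-password';
GRANT CONNECT ON DATABASE appdb TO app_user;
GRANT USAGE ON SCHEMA app TO app_user;
-- grant only the DML the application actually needs
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO app_user;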

Never take that shortcut "alter user my_app_user with superuser"


It's good to segregate roles for your users. There should be at least 3 roles: superuser (the DB superuser), application superuser (the one who owns and controls application schema changes) and application user (the one who uses the application). Application reader/auditor etc. could be additional roles depending on your environment.
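
For example, a minimal sketch of such a segregation (names are illustrative; app_user is the runtime account created earlier)-

CREATE USER app_owner WITH PASSWORD 'owner-password';  -- owns and changes the application schema
ALTER SCHEMA app OWNER TO app_owner;
GRANT USAGE ON SCHEMA app TO app_user;                 -- the application only uses, never alters, the schema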

It is a good idea to set a limit on the number of connections which can be established by these users, and to restrict their work_mem and a few other parameters which you can control at the user level.
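
For instance (values are illustrative)-

ALTER USER app_user CONNECTION LIMIT 50;
ALTER USER app_user SET work_mem = '8MB';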