Today I came across something very interesting:
I had a query with an inner join between two tables and a few more criteria in the WHERE clause, along with one condition comparing an integer column, int1, against a value.
Both tables had about 1 million records each. A simple SELECT on each table individually, as well as on the join, returned all records within 20 seconds. However, when the full query was executed on SQL Server, it ran for 3 hours at 100% CPU and returned no results. I wondered why the filtering alone should take that long.
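A simplified sketch of the kind of query involved (the table names, join keys, other filters, and the comparison value are made up for illustration; only int1 and the overall shape come from the real query):

SELECT t1.*
FROM dbo.Table1 AS t1
INNER JOIN dbo.Table2 AS t2
    ON t2.Table1Id = t1.Id          -- join between the two ~1 million row tables
WHERE t1.SomeFlag = 1               -- a few ordinary filter criteria
  AND t2.CreatedOn >= '2012-01-01'
  AND t1.int1 < 1000;               -- the integer comparison that caused trouble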
I started adding the WHERE conditions back one at a time and ran the query after each addition to see which clause was taking the time. Everything worked fine until I added the comparison on int1, which brought back the endless runtime. When I wrapped the column in an ISNULL check, the query completed quickly.
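In other words, the difference was a predicate along these lines (the comparison value is again just a placeholder):

-- slow: ran for hours
AND t1.int1 < 1000

-- fast: the extra ISNULL wrapper made the query finish in seconds
AND ISNULL(t1.int1, 0) < 1000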
But a nagging feeling remained that something was not right. Why should adding an extra check improve performance? Was I missing something?
Next I started a Google search: "Does an ISNULL check improve query performance?" I landed on a link with the exact same question.
The answer startled me :).
When I added the ISNULL condition, SQL Server could not reuse the existing execution plan; it compiled a new one based on the latest statistics, which is why it completed quickly. The execution plan for my original query was based on stale statistics, and that is why it was slow.
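One way to check whether stale statistics might be the culprit is to look at when they were last updated, using the sys.stats catalog view and the STATS_DATE function (the table name below is a placeholder):

SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Table1');   -- repeat for each table in the join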
I updated the statistics of my database by executing the following:
EXEC sp_updatestats
This caused the affected execution plans to be recompiled, and after this my original query ran as fast as the other queries.
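If a database-wide sweep is heavier than you need, statistics can also be refreshed for just the tables involved (table name is a placeholder):

UPDATE STATISTICS dbo.Table1 WITH FULLSCAN;   -- rescan every row instead of sampling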
Conclusion: an ISNULL check does not improve query performance by itself; in my case it merely forced SQL Server to compile a fresh execution plan. Keeping statistics up to date is what actually fixed the query.