Arun Kumar

Top resource-intensive SQL

Database performance is a major concern for a DBA, and poorly written SQL is usually at the heart of it. At times the application team might tell you that the database is running slow. You can run the query below to get the top 5 resource-intensive SQL statements along with their SQL IDs, and then hand the list to the application team so they can optimize them.

Find the top 5 resource-intensive SQL

col Rank for a4
SELECT *
FROM (SELECT DENSE_RANK () OVER
      (PARTITION BY "Snap Day" ORDER BY "Buffer Gets" + "Disk Reads" DESC) AS "Rank", i1.*
FROM (SELECT TO_CHAR (hs.begin_interval_time, 'MM/DD/YY' ) "Snap Day",
SUM (shs.executions_delta) "Execs",
SUM (shs.buffer_gets_delta) "Buffer Gets",
SUM (shs.disk_reads_delta) "Disk Reads",
ROUND ( (SUM (shs.buffer_gets_delta)) / SUM (shs.executions_delta), 1 ) "Gets/Exec",
ROUND ( (SUM (shs.cpu_time_delta) / 1000000) / SUM (shs.executions_delta), 1 ) "CPU/Exec(S)",
ROUND ( (SUM (shs.iowait_delta) / 1000000) / SUM (shs.executions_delta), 1 ) "IO/Exec(S)",
shs.sql_id "Sql id",
REPLACE (CAST (DBMS_LOB.SUBSTR (sht.sql_text, 50) AS VARCHAR (50) ), CHR (10), '' ) "Sql"
FROM dba_hist_sqlstat shs INNER JOIN dba_hist_sqltext sht
ON (sht.sql_id = shs.sql_id)
INNER JOIN dba_hist_snapshot hs
ON (shs.snap_id = hs.snap_id)
GROUP BY shs.sql_id, TO_CHAR (hs.begin_interval_time, 'MM/DD/YY'),
CAST (DBMS_LOB.SUBSTR (sht.sql_text, 50) AS VARCHAR (50) )
HAVING SUM (shs.executions_delta) > 0
ORDER BY "Snap Day" DESC) i1)
WHERE "Rank" <= 5 AND "Snap Day" = TO_CHAR (SYSDATE, 'MM/DD/YY');

