The presentation was divided into two parts: DWH and OLTP. One of the main messages was to “unlearn” old practices that are no longer valid because the technology and the product have changed, and to get rid of old myths. All demonstrations were run on an Exadata machine in the US. Unfortunately the hotel’s WLAN had some issues in the morning, causing “connection timeouts”.
The DWH part covered large-scale data techniques: partitioning, compression, object statistics, parallelisation, and the resource manager. There is always a cost (a processing cost, not just a licensing cost) when partitioning or compression is used. Data loads may be slower, but reading large data sets or benefiting from partition pruning will pay that back. New DWH databases should follow this strategy:
- Create tables
- Optionally, run (or explain) queries against the empty tables (the optimizer samples information such as column usage, which improves statistics gathering later)
- Enable incremental statistics (for large partitioned tables)
- Load data
- Gather statistics
- Create indexes (index creation automatically gathers statistics for the index, which is why it is scheduled after statistics gathering)
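The steps above can be sketched roughly as follows; the `sales` table and its columns are made up for illustration, not taken from the talk:

```sql
-- 1. Create the (partitioned) table first.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (DATE '2014-01-01'));

-- 2. Optionally run/explain the application queries against the empty table
--    so the optimizer can record column usage.

-- 3. Enable incremental statistics for the large partitioned table.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(USER, 'SALES', 'INCREMENTAL', 'TRUE');
END;
/

-- 4. Load the data (set based), then 5. gather statistics.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'SALES');
END;
/

-- 6. Create indexes last; index creation collects index statistics itself.
CREATE INDEX sales_date_ix ON sales (sale_date);
```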
Developers must make sure they use set-based processing instead of row-by-row processing (“slow by slow”).
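A minimal sketch of the difference, with hypothetical `staging_sales` and `sales` tables:

```sql
-- Row by row ("slow by slow"): one execution per row, avoid this pattern.
BEGIN
  FOR r IN (SELECT * FROM staging_sales) LOOP
    INSERT INTO sales VALUES r;
  END LOOP;
END;
/

-- Set based: a single statement the database can optimize and parallelize.
INSERT /*+ APPEND */ INTO sales
SELECT * FROM staging_sales;
```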
The OLTP session started with a demonstration of the overhead caused by too many connections coming from connection pools. Configuring hundreds or even thousands of connections is a bad idea, as a handful of processing cores cannot handle that many requests. Unnecessary overhead is also caused by logon/logoff storms, hard parses, and soft parses. It is better to do no parsing at all by
- statement caching,
- using PL/SQL,
- reusing cursors.
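As one sketch of the “no parsing” idea: static SQL inside PL/SQL uses bind variables by design, is parsed once, and its cursor is cached and reused automatically by the PL/SQL engine. The procedure and table names here are invented for illustration:

```sql
-- Static SQL in PL/SQL: parsed once, cursor cached and reused on
-- subsequent calls; p_id is a bind variable, never a literal.
CREATE OR REPLACE PROCEDURE get_customer_name (
  p_id   IN  NUMBER,
  p_name OUT VARCHAR2
) AS
BEGIN
  SELECT name
    INTO p_name
    FROM customers
   WHERE customer_id = p_id;
END;
/
```

On the client side the same effect comes from enabling the driver’s statement cache (e.g. in JDBC) so prepared statements are reused instead of re-parsed.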
SQL statements still have the biggest impact on performance. The presenters discussed SQL mistakes they see happen again and again, and suggested five basic checks:
- Check for parse errors (check “parse count (failures)” statistic)
- Validate that the correct join conditions are specified
- Caution with implicit data type conversions
- Caution with wildcards and functions
- Caution with fuzzy joins
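Two of these checks can be illustrated with a small (hypothetical) example. An implicit data type conversion or a function wrapped around an indexed column silently disables a normal index on that column:

```sql
-- Implicit conversion: if phone_number is VARCHAR2, Oracle rewrites the
-- predicate as TO_NUMBER(phone_number) = 5551234 and cannot use a
-- regular index on phone_number.
SELECT * FROM customers WHERE phone_number = 5551234;    -- risky
SELECT * FROM customers WHERE phone_number = '5551234';  -- correct type

-- Function on the column: same effect as the implicit conversion above.
SELECT * FROM orders WHERE TRUNC(order_date) = DATE '2014-06-01';  -- risky
SELECT * FROM orders
 WHERE order_date >= DATE '2014-06-01'
   AND order_date <  DATE '2014-06-02';                  -- index friendly
```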
The last topic was application instrumentation. The packages dbms_application_info and dbms_session should be used to instrument the code properly and to make tracing much easier for middle-tier applications.
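A minimal instrumentation sketch; the module, action, and identifier values are made up, but the package calls are the standard ones:

```sql
BEGIN
  -- Tag the session so the work shows up in v$session, ASH, AWR, etc.
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'order_entry',
    action_name => 'create_order');

  -- ... business logic for this step ...

  -- Update the action as the code moves to the next step.
  DBMS_APPLICATION_INFO.SET_ACTION('price_order');

  -- For middle-tier apps sharing pooled sessions, tag the end user too,
  -- so tracing can follow one user across pooled connections.
  DBMS_SESSION.SET_IDENTIFIER('web_user_42');
END;
/
```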
The three presenters did a great job of making the event memorable. If it comes to your area, it is well worth attending. There are also videos available on youtube.com (search for “Andrew Holdsworth real world performance”) to get a first glimpse of what to expect.