Oracle8i Parallel Server Concepts and Administration
Release 8.1.5

A67778-01


12
Application Analysis

This chapter provides a conceptual framework for optimizing Oracle Parallel Server (OPS) application design. It includes the following sections:

How Detailed Must Your Analysis Be?
Understanding Your Application Profile
Planning the Implementation
Partitioning Guidelines

How Detailed Must Your Analysis Be?

The level of detail to which you must analyze an application depends on your goals for the use of OPS. If you need OPS to improve overall database throughput, then a detailed analysis of the database design and application workload profile will be necessary. This ensures that the additional CPU power provided by each node of OPS is fully used for application processing. Even if you are using OPS primarily to provide high availability, careful analysis will enable you to predict the required resources.

Experience gained over many benchmarks and real applications shows that for optimal performance, OPS systems must minimize the computing resources used for parallel cache management. This means minimizing the number of instance lock operations. A successful OPS implementation ensures that each node performs very few instance lock operations and that, consequently, the machine-to-machine high speed interconnect traffic stays within the design limitations of the cluster.

You cannot successfully minimize the number of PCM lock operations during the final fine-tuning phase of the database's lifetime. Rather, you must plan for this early in the physical database design process.

See Also:

Chapter 13, "Designing Databases for Parallel Server", for a case study showing how to design applications to take advantage of OPS.  

Understanding Your Application Profile

To understand your application profile you must classify tables according to application functions and access patterns. This section describes:

Analyzing Application Functions and Table Access Patterns
Read-only Tables
Random SELECT and UPDATE Tables
INSERT, UPDATE, or DELETE Tables

The following comments apply equally to clustered and non-clustered tables.

Analyzing Application Functions and Table Access Patterns

Beyond performing the usual application and data analysis phases, a database designer for OPS must anticipate the types of transactions or business functions that may cause excessive lock conversion rates. You must cross-reference the core application tables and their access patterns with the application functions.

See Also:

Chapter 13, "Designing Databases for Parallel Server", for worksheets you can use to analyze table access patterns.  

Read-only Tables

With tables that are predominantly read-only, all OPS nodes quickly initialize the PCM locks to shared mode and very little lock activity takes place. Read-only tables and their associated index structures require the allocation of very few PCM locks. With this table type you can expect good performance and scalability with OPS.

Also consider placing such tables in read-only tablespaces, using the SQL statement ALTER TABLESPACE ... READ ONLY. This has several advantages: access to the tablespace and overall recovery are faster, PCM locks are not required, and you need to back up the tablespace only once after you make it read-only.
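
For example, a tablespace holding purely historical data might be made read-only as follows; the tablespace name is hypothetical:

   -- Make the tablespace read-only, then back it up once:
   ALTER TABLESPACE hist_data READ ONLY;

   -- If the data must be modified again later, return it to read write mode:
   ALTER TABLESPACE hist_data READ WRITE;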

Scalability of parallel query in OPS depends on the interconnect speed between the nodes. You may need to run high degrees of parallelism just to keep the processors busy; it is not unusual to run a degree of parallelism three times the number of nodes (or processors).
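
As a sketch, on a four-node cluster a degree of parallelism of roughly 3 x 4 = 12 might be used; the table name and degree below are examples only:

   -- Set a default degree of parallelism on the table:
   ALTER TABLE sales_hist PARALLEL (DEGREE 12);

   -- Or request the degree for an individual query with a hint:
   SELECT /*+ PARALLEL(s, 12) */ COUNT(*)
     FROM sales_hist s;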

Large sorts, such as queries using SORT MERGE JOINs or sorts with GROUP BY and ORDER BY clauses, can update the data dictionary in the SYSTEM tablespace. Even if the application is read-only, the files affected by such updates should have their own PCM locks, as specified in the GC_FILES_TO_LOCKS parameter.
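
A minimal init.ora sketch of such an allocation, assuming datafile 1 belongs to the SYSTEM tablespace and datafiles 2 through 4 hold the read-only application data; all file numbers and lock counts are examples only:

   # Give the SYSTEM tablespace file its own pool of hashed PCM locks,
   # and each read-only datafile a small pool of its own.
   GC_FILES_TO_LOCKS = "1=500:2-4=100EACH"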

See Also:

"The Four Levels of Scalability" on page 2-1, and "Setting the Degree of Parallelism" in Oracle8i Tuning.

Random SELECT and UPDATE Tables

Random SELECT and UPDATE tables are tables that are not partitioned; transactions against them may read and then update any row in the table. This type of access requires many lock conversions. First, the instance executing the transaction must obtain a shared PCM lock on the data block. This lock request may cause a lock downgrade operation on another node. When the UPDATE is actually performed, the instance must then obtain an exclusive mode PCM lock.
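
The following sketch shows the access pattern described above; the table, columns, and bind variables are hypothetical:

   SELECT balance
     FROM accounts
    WHERE account_id = :id;            -- requires a shared PCM lock on the block

   UPDATE accounts
      SET balance = balance - :amount
    WHERE account_id = :id;            -- requires an exclusive PCM lock on the block

   COMMIT;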

If user transactions on different nodes concurrently modify data blocks covered by the same PCM lock, there is a noticeable performance penalty. In some cases you can reduce contention by creating additional hashed PCM locks. For large tables, however, hardware and practical limitations may restrict the number of hashed PCM locks you can effectively use. For example, eliminating false contention might require millions of hashed PCM locks, which memory limitations and instance startup time make impossible. In these cases, fine grain (releasable) hashed locks may be a good alternative.
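
As an illustration only, releasable hashed locks can be requested in the GC_FILES_TO_LOCKS parameter by appending R to the lock count; the file number and lock count below are assumptions:

   # Allocate 10000 releasable (fine grain) locks to datafile 5:
   GC_FILES_TO_LOCKS = "5=10000R"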

For this type of table, if none of the table's index keys are updated, then the indexes' PCM locks are only converted to shared mode, so the indexes require few PCM locks.

See Also:

"Implement Hashed or Fine Grain Locking" for more information.  

INSERT, UPDATE, or DELETE Tables

Transactions on random INSERT, UPDATE, and DELETE tables read a number of data blocks and then modify some or all of them. For each data block modified, this again requires converting the PCM lock to shared mode for the read and then to exclusive mode when the block is modified. This process has the same performance issues as random SELECT and UPDATE tables.

This type of table presents additional performance issues for two main reasons: index data blocks are changed, and contention occurs for data blocks on the table's free list.

INSERT, DELETE, and UPDATE transactions that modify indexed keys must also maintain the table's indexes. This requires modifying additional index blocks, so the number of potential lock conversions increases. In addition, index blocks are likely to require further lock conversions because users on other nodes use the index to access other data. This applies particularly to the root and branch blocks of the index, where block splitting may be taking place; the splits cause more lock conversions from null to exclusive mode, and back, on all nodes within the cluster.

If the INSERT and DELETE operations occur within long-running transactions, there is a high chance that another instance will require read consistency information to complete its own transactions. This forces still more lock conversions, because rollback segment data blocks must be flushed to disk and made available to other instances.

Index block contention involving high lock convert rates must be avoided at all costs if performance is a critical issue in your OPS implementation.

Index block contention can become more severe when a sequence number generator is used to generate unique keys for a table from multiple OPS nodes. When generating unique keys, make the instance number part of the primary key so that each instance performs INSERTs into a different part of the index. Spreading the INSERT load across the full width of the index can improve both single and multiple instance performance; you can do this with reverse key indexes.
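
The following sketch illustrates both techniques; the ORDERS table, its columns, and the sequence are hypothetical:

   CREATE SEQUENCE order_seq CACHE 100;

   -- Include the instance number in the key so each instance inserts
   -- into a different part of the index:
   INSERT INTO orders (order_id, instance_id, order_date)
   VALUES (order_seq.NEXTVAL, USERENV('INSTANCE'), SYSDATE);

   -- Alternatively, a reverse key index spreads sequential key values
   -- across the full width of the index:
   CREATE INDEX orders_id_rev ON orders (order_id) REVERSE;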

In INSERT operations, allocation of free space within an extent can also cause high lock conversion rates, because multiple instances may want to insert new rows into the same data blocks, or into data blocks that are close together. Contention occurs if these data blocks are covered by the same PCM lock. To avoid this, either partition the tables and indexes so that different instances use them, or create the tables so they can use multiple free lists and multiple free list groups.
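
Continuing the hypothetical ORDERS table from the previous sketch, the statements below create it with multiple free lists and free list groups and pre-allocate an extent to one instance; all names and sizes are examples only:

   CREATE TABLE orders (
       order_id     NUMBER,
       instance_id  NUMBER,
       order_date   DATE
   )
   STORAGE (INITIAL 10M NEXT 10M
            FREELISTS 4
            FREELIST GROUPS 2);        -- typically one group per instance

   -- Pre-allocate an extent to the free list group of instance 1:
   ALTER TABLE orders ALLOCATE EXTENT (INSTANCE 1);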

See Also:

Chapter 17, "Using Free List Groups to Partition Data" and Oracle8i Concepts.  

Planning the Implementation

Having analyzed the application workload, you can now plan the application's OPS implementation. Using the access profile you can see which transactions will run well over multiple nodes, and which transactions should be executed within a single node. In many cases compromises and trade-offs are required to ensure that the application performs as needed.


Note:

Load balancing between nodes should not be the main objective. While load balancing is useful in benchmarking situations, it may not be useful in a real-world application. Partitioning is the key to performance in OPS.  


Partitioning Guidelines

This section covers the following topics:

Overview
Application Partitioning
Data Partitioning

Overview

The database designer must clearly understand the system performance implications and design trade-offs of application partitioning. Always bear in mind that your goal is to minimize synchronization: minimizing synchronization results in optimal performance.

As noted earlier, if you minimize the number of lock conversions, OPS performance will be predictable and scalable. By partitioning the application, the data, or both, you can create and maintain cache affinities between database data and specific nodes of the cluster. A well-partitioned application performs a minimum number of lock conversions, so pings and Integrated Distributed Lock Manager (IDLM) activity should be minimal. If excessive IDLM lock activity occurs in a partitioned application, either your partitioning strategy is inappropriate or the database creation and tuning process was incorrect.

Application Partitioning

Many partitioning techniques exist for achieving high system performance. One of the simplest ways to partition the load on the database is to run different applications that access the same database on different nodes of the cluster. For example, one application may reference only a fixed set of tables residing in one set of datafiles, while another application references a different set of tables residing in a different set of datafiles. You can run these applications on different nodes of the cluster and expect good performance if the datafiles are assigned different PCM locks. Because the applications access different database objects in different files, there is no conflict for the same database blocks.

This scenario is particularly applicable to applications that support many users and high OLTP workloads during the day, and run heavy batch and decision support workloads at night. In this case, you can partition applications among the cluster nodes to sustain good OLTP performance during the day.

This model is similar to a distributed database model, where tables that are accessed together are stored together. At night, when you need to access tables that may be partitioned for OLTP purposes, you can still exploit the advantages of a single database: all the data is effectively stored within one database. This should provide improved batch and decision support processing, better query performance, reduced network traffic, and fewer data replication issues.

With this approach, you must ensure that each application's tables and indexes are stored such that no single PCM lock covers data blocks used by both applications. Should this happen, the benefit of partitioning would be lost. To correct the situation, store each application's table and index data in separate datafiles.
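
For example, each application's objects might be placed in their own tablespaces and datafiles; all names and sizes below are hypothetical:

   CREATE TABLESPACE oltp_data
       DATAFILE '/u01/oradata/ops/oltp_data01.dbf' SIZE 500M;

   CREATE TABLESPACE batch_data
       DATAFILE '/u02/oradata/ops/batch_data01.dbf' SIZE 500M;

   -- Tables used by the OLTP application:
   CREATE TABLE order_entry (order_id NUMBER, status VARCHAR2(10))
       TABLESPACE oltp_data;

   -- Tables used by the reporting application:
   CREATE TABLE sales_summary (region VARCHAR2(10), total NUMBER)
       TABLESPACE batch_data;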

Applications sharing a set of SQL statements perform best when they run on the same instance. Because shared SQL areas are not shared across instances, similar sets of SQL statements should run on one instance to improve memory usage and reduce parsing.

Data Partitioning

Sometimes the partitioning of applications between nodes may not be possible. As an alternative approach, you can partition the database objects themselves. To do this effectively, you must analyze the application profile in depth. You may or may not need to split a table into multiple tables. In OPS, the partitioning process can involve horizontal partitioning of the table between predefined key ranges.
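
As a sketch of horizontal partitioning between predefined key ranges, the following statement divides a hypothetical ACCOUNTS table by branch number; the table, key ranges, and tablespaces are assumptions for illustration:

   CREATE TABLE accounts (
       account_id  NUMBER,
       branch_id   NUMBER,
       balance     NUMBER
   )
   PARTITION BY RANGE (branch_id) (
       PARTITION branches_east VALUES LESS THAN (500)
           TABLESPACE acct_east,       -- accessed mainly through instance 1
       PARTITION branches_west VALUES LESS THAN (MAXVALUE)
           TABLESPACE acct_west        -- accessed mainly through instance 2
   );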

In addition to partitioning and splitting database objects, ensure that each user transaction is executed by the correct OPS instance. The correct node for execution of the transaction is a function of the actual data values being used in the transaction. This process is more commonly known as data-dependent routing.

Partitioning a table to increase parallel server performance has various development and administration implications.

From a development perspective, as soon as the table is partitioned, the quantity and complexity of application code increases. In addition, partitioning a table may compromise the performance of other application functions, such as batch and decision support queries.

You can accomplish data-dependent routing in one of two ways. If the partitioning of the tables fits well with actual usage patterns (for example, the table is partitioned by state or by call center, and users are partitionable in the same way), then you can route manually by having each user connect to the correct instance. Otherwise, data-dependent routing may be complex to administer and may require additional application code.

You can simplify the process if the application uses a transaction processing monitor (TPM) or an RPC mechanism. A data-dependent routing strategy based on the input RPC arguments can be coded into the TPM configuration. Similarly, the strategy can be coded in procedural code using a case statement that determines which instance should execute the transaction.
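
One possible sketch of such a routing strategy keeps the key ranges and the corresponding instance connect aliases in a small mapping table that the TPM or client code consults before starting the transaction; all names below are hypothetical:

   CREATE TABLE routing_map (
       low_branch    NUMBER,
       high_branch   NUMBER,
       instance_tns  VARCHAR2(30)      -- Net8 alias of the target instance
   );

   INSERT INTO routing_map VALUES (1,   499, 'ops_node1');
   INSERT INTO routing_map VALUES (500, 999, 'ops_node2');

   -- The caller supplies the branch number and connects to the returned alias:
   SELECT instance_tns
     FROM routing_map
    WHERE :branch_id BETWEEN low_branch AND high_branch;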

See Also:

"Client-Server Systems" and Oracle8i Tuning for more information about partitioning.  




Copyright © 1999 Oracle Corporation.

All Rights Reserved.
