Oracle8(TM) Server Tuning
Release 8.0

A54638-01


1 Introduction to Oracle Performance Tuning

The Oracle Server is a sophisticated and highly tunable software product. Its flexibility allows you to make small adjustments that affect database performance. By tuning your system, you can tailor its performance to best meet your needs.

This chapter gives an overview of tuning issues. Topics in this chapter include:

What Is Performance Tuning?
Who Tunes?
Setting Performance Targets
Setting User Expectations
Evaluating Performance

What Is Performance Tuning?

Performance must be built in! Performance tuning cannot be performed optimally after a system is put into production. To achieve the performance targets of response time, throughput, and constraints, you must tune application analysis, design, and implementation. This section introduces the following fundamental concepts:

Trade-offs Between Response Time and Throughput
Response Time
System Throughput
Wait Time
Critical Resources
Effects of Excessive Demand
Adjustments to Relieve Problems

Trade-offs Between Response Time and Throughput

Goals for tuning vary, depending on the needs of the application. Online transaction processing (OLTP) applications define performance in terms of throughput: they must process thousands or even millions of very small transactions per day. By contrast, decision support system (DSS) applications define performance in terms of response time. Users of DSS applications make dramatically different kinds of demands on the database: one moment they may enter a query that fetches only a few records, and the next they may enter a massive parallel query that fetches and sorts hundreds of thousands of records from several tables. Throughput becomes more of an issue when an application must support a large number of users running DSS queries.

Response Time

Because response time equals service time plus wait time, you can increase performance in two ways: by reducing service time, or by reducing wait time.

Figure 1-1 illustrates ten independent tasks competing for a single resource.

Figure 1-1: Sequential Processing of Multiple Independent Tasks

In this example only task 1 runs without having to wait. Task 2 must wait until task 1 has completed; task 3 must wait until tasks 1 and 2 have completed, and so on. (Although the figure shows the independent tasks as the same size, the size of the tasks will vary.)

Note: In parallel processing, where multiple resources are available, each independent task executes immediately using its own resource; no wait time is involved.
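The following short sketch (illustrative only; it is not part of the Oracle Server) models the ten tasks of Figure 1-1 sharing a single resource, assuming each task needs one second of service. It shows response time growing as wait time accumulates:

    # Illustrative model of Figure 1-1: ten independent tasks, each needing
    # one second of service, queued behind a single shared resource.
    SERVICE_TIME = 1.0          # seconds of service each task requires (assumed)
    NUM_TASKS = 10

    for task in range(1, NUM_TASKS + 1):
        wait_time = (task - 1) * SERVICE_TIME        # time queued behind earlier tasks
        response_time = SERVICE_TIME + wait_time     # response time = service time + wait time
        print(f"task {task:2d}: wait {wait_time:4.1f}s  response {response_time:4.1f}s")

    # Only task 1 runs without waiting; task 10 waits 9 seconds for its
    # 1 second of service.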

System Throughput

System throughput equals the amount of work accomplished in a given amount of time. You can increase throughput in two ways: by reducing the service time per unit of work, so that the same resources accomplish more work in the same time; or by adding resources, so that more work can proceed at once.
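As a simple illustration (hypothetical numbers, not Oracle output), the arithmetic below shows how each technique raises throughput:

    # Throughput = work accomplished / elapsed time (hypothetical numbers).
    transactions = 1000
    elapsed_seconds = 50.0
    print(f"baseline: {transactions / elapsed_seconds:.0f} transactions per second")

    # Technique 1: reduce the service time per transaction, so the same
    # resource completes the work in half the time.
    print(f"halved service time: {transactions / (elapsed_seconds / 2):.0f} transactions per second")

    # Technique 2: add a second, identical resource, so twice the work
    # completes in the same elapsed time.
    print(f"doubled resources: {2 * transactions / elapsed_seconds:.0f} transactions per second")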

Wait Time

While the service time for a task may stay the same, wait time increases as contention increases. If ten users request a service that takes 1 second, the tenth user must wait 9 seconds to receive that 1 second of service.

Figure 1-2: Wait Time Rising with Increased Contention for a Resource

Critical Resources

Resources such as CPUs, memory, I/O capacity, and network bandwidth are key to reducing service time. Adding resources makes higher throughput possible and allows swifter response time. Performance depends on how many resources are available, how many clients need those resources, how long they must wait for a resource, and how long they hold it.

Figure 1-3 shows that as the number of units requested rises, the time to service completion rises.

Figure 1-3: Time to Service Completion vs. Demand Rate
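One simple way to see the shape of Figure 1-3 (a textbook single-server queue approximation, not a formula from this manual) is to compute the average time to completion as the demand rate approaches the capacity of the resource:

    # Single-server queue approximation (M/M/1): average time to completion
    # is 1 / (service rate - demand rate). All numbers are illustrative.
    SERVICE_RATE = 10.0                      # requests the resource can complete per second

    for demand_rate in (1.0, 5.0, 8.0, 9.0, 9.5, 9.9):   # requests arriving per second
        avg_completion = 1.0 / (SERVICE_RATE - demand_rate)
        print(f"demand {demand_rate:4.1f}/s -> average completion time {avg_completion:6.2f}s")

    # At 1 request per second, completion takes about 0.11 seconds; at 9.9
    # requests per second, it takes 10 seconds.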

To manage this situation, you have two options: you can limit the demand rate to maintain acceptable response times, or you can add resources to increase the achievable throughput.

Effects of Excessive Demand

Excessive demand gives rise to increased response time and reduced throughput, as Figure 1-4 illustrates.

If there is any possibility of demand rate exceeding achievable throughput, a demand limiter is essential.

Figure 1-4: Increased Response Time/Reduced Throughput
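A demand limiter can be as simple as a cap on the number of requests allowed to run concurrently. The sketch below is illustrative only (it is not an Oracle feature); do_transaction is a hypothetical stand-in for the real unit of work:

    import threading
    import time

    MAX_CONCURRENT = 20                      # highest concurrency the system sustains well (assumed)
    limiter = threading.BoundedSemaphore(MAX_CONCURRENT)

    def do_transaction(work):
        time.sleep(0.01)                     # hypothetical stand-in for the real work
        return work

    def handle_request(work):
        # Excess requests block here, so the demand presented to the critical
        # resource never exceeds what it can sustain.
        with limiter:
            return do_transaction(work)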

Adjustments to Relieve Problems

Performance problems can be relieved by making the following adjustments:

adjusting unit consumption  

Some problems can be relieved by using fewer resources per transaction or by reducing service time, for example by reducing the number of I/Os each transaction performs.

adjusting functional demand  

Other problems can be abated by rescheduling or redistributing the work.  

adjusting capacity  

Problems may also be relieved by increasing or reallocating resources. For example, moving from a single CPU to a symmetric multiprocessor gives you multiple CPUs over which to spread the work.

For example, if your system's busiest times are from 9:00 AM until 10:30 AM and from 1:00 PM until 2:30 PM, you can plan to run batch jobs in the background after 2:30 PM, when more capacity is available. In this way you can spread out the demand more evenly. Alternatively, you can allow for delays at peak times.

Figure 1-5: Adjusting Capacity and Functional Demand
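The sketch below (hypothetical, not Oracle functionality) shows the rescheduling idea from the example above: batch work submitted during the peak windows is deferred until after 2:30 PM, when more capacity is available:

    from datetime import datetime, time

    # Peak windows from the example above; work submitted during these windows
    # is deferred and run later, when more capacity is available.
    PEAK_WINDOWS = [(time(9, 0), time(10, 30)), (time(13, 0), time(14, 30))]

    deferred_jobs = []                       # drained by a background run after 2:30 PM

    def in_peak_window(now):
        return any(start <= now.time() <= end for start, end in PEAK_WINDOWS)

    def submit_batch_job(job_name, now):
        if in_peak_window(now):
            deferred_jobs.append(job_name)   # spread the demand: run it later
            print(f"{job_name}: deferred until after 2:30 PM")
        else:
            print(f"{job_name}: running now")

    submit_batch_job("monthly_report", datetime(1997, 6, 2, 9, 15))   # peak: deferred
    submit_batch_job("monthly_report", datetime(1997, 6, 2, 15, 0))   # off peak: runs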

Who Tunes?

Everyone involved with the system has some role in the tuning process. When people communicate and document the system's characteristics, tuning becomes significantly easier and faster.

Figure 1-6: Who Tunes the System?

Decisions made in application development and design have the most impact on performance. Once the application is deployed, the database administrator usually has the primary responsibility for tuning, within the limitations of the design.

See Also: Chapter 3, "Diagnosing Performance Problems in an Existing System" for keys that can help database administrators (DBAs) to identify performance problems and solve them reactively.

Setting Performance Targets

Whether you are designing or maintaining a system, you should set specific performance goals so that you know when to tune. Without a specific goal, you can spend time needlessly altering initialization parameters or SQL statements with little or no gain.

When designing your system, set a specific goal: for example, an order entry response time of less than three seconds. If the application does not meet that goal, identify the bottleneck causing the slowdown (for example, I/O contention), determine the cause, and take corrective action. During development, you should test the application to determine whether it meets the designed performance goals before deploying it.
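For example, a development-time check of the three-second order entry goal might look like the following sketch (illustrative only; place_order is a hypothetical stand-in for the real application call):

    import time

    RESPONSE_TIME_GOAL = 3.0        # seconds, from the design specification

    def place_order():
        time.sleep(0.5)             # hypothetical stand-in for the order entry transaction

    start = time.perf_counter()
    place_order()
    elapsed = time.perf_counter() - start

    if elapsed > RESPONSE_TIME_GOAL:
        print(f"goal missed: {elapsed:.2f}s > {RESPONSE_TIME_GOAL}s; identify the bottleneck")
    else:
        print(f"goal met: {elapsed:.2f}s")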

Tuning is usually a series of trade-offs. Once you have determined the bottlenecks, you may have to sacrifice some other areas to achieve the desired results. For example, if I/O is a problem, you may need to purchase more memory or more disks. If a purchase is not possible, you may have to limit the concurrency of the system to achieve the desired performance. However, if you have clearly defined goals for performance, the decision on what to trade for higher performance is simpler because you have identified the most important areas.

Setting User Expectations

Application developers and database administrators must be careful to set appropriate performance expectations for users. When the system carries out a particularly complicated operation, response time may be slower than when it is performing a simple operation. In cases like this, the slower response time is not unreasonable.

If a DBA promises a response time of 1 second, consider how that promise might be interpreted. The DBA might mean that the operation takes 1 second in the database, and might well be able to achieve this goal. However, users querying over a network might experience a delay of a couple of seconds due to network traffic: they will not receive the response they expect within 1 second.

Evaluating Performance

With clearly defined performance goals, you can readily determine when performance tuning has been successful. Success depends on the functional objectives you have established with the user community, your ability to objectively measure whether or not the criteria are being met, and your ability to take corrective action to overcome any exceptions. The rest of this tuning manual describes the tuning methodology in detail, with information about diagnostic tools and the types of corrective actions you can take.

DBAs responsible for solving performance problems must keep a wide view of all the factors that together determine response time. The perceived area of a performance problem is frequently not the actual source of the problem. Users in the preceding example might conclude that there is a problem with the database, whereas the actual problem is with the network. A DBA must monitor the network, disk, CPU, and so on, to find the actual source of the problem, rather than simply assume that all performance problems stem from the database.

Ongoing performance monitoring enables you to maintain a well-tuned system. Keeping a history of the application's performance over time enables you to make useful comparisons. With data about actual resource consumption for a range of loads, you can conduct objective scalability studies and, from these, predict the resource requirements for anticipated future load volumes.
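A scalability study of this kind can be as simple as fitting observed resource consumption against load and extrapolating to an anticipated load. The sketch below uses hypothetical numbers for illustration:

    # Fit a straight line to observed CPU consumption at several load levels,
    # then predict consumption at an anticipated future load (hypothetical data).
    history = [                    # (concurrent users, CPU seconds consumed per minute)
        (50, 12.0),
        (100, 23.5),
        (200, 47.0),
        (400, 95.0),
    ]

    n = len(history)
    mean_x = sum(x for x, _ in history) / n
    mean_y = sum(y for _, y in history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
             / sum((x - mean_x) ** 2 for x, _ in history))
    intercept = mean_y - slope * mean_x

    future_users = 800
    predicted = slope * future_users + intercept
    print(f"predicted CPU consumption at {future_users} users: {predicted:.1f} CPU seconds/minute")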

See Also: Chapter 4, "Overview of Diagnostic Tools"



