
Second-level design
Introduction
The objective of first-level design is to develop a logically simple, redundancy-free
model which will form a sound basis for subsequent stages of design. Second-level
design is concerned with amending, or flexing, the model to improve its performance, at
least to a level at which it is compatible with functional analysis constraints, notably
storage requirements and transaction processing times. Flexing should also take
account of the general design objectives. The first-level design might be left as it is if the
main objective is flexibility, but substantial changes may be necessary if efficiency of
performance is crucial.
The approach taken here is to do second-level design before the model is mapped into
a form which satisfies the constraints of a particular database management system
(DBMS). This design sequence has the merit that the issues are not clouded by concern
over the detailed structure of the DBMS, but the disadvantage that features of the
DBMS may subsequently turn out to have a marked effect on the justification of second-
level design decisions. The latter is often particularly true of the mechanisms provided
for representing relationships. In practice, it is often sufficient to keep an eye on just the
main features of the target DBMS, but for some DBMSs it may be better to defer second-
level design until after the first-level model has been mapped into DBMS form.
The second-level design procedure discussed below takes a more quantitative
approach than the first-level procedure, but any calculations required are restricted to
relatively straightforward estimations of trade-offs in storage requirements and
transaction processing times. It is best to concentrate on the critical constraints, such as
those transactions which need the fastest response times, or those tables which occupy
the most storage. Once the model has been flexed into a state which satisfies these critical
constraints, it can be checked against the less critical ones to see whether further flexing
is necessary.
Transaction processing time can be estimated from the average number of accesses
to table occurrences per transaction and the average access time of the probable storage
device, making due allowance for the type of access (e.g. direct or sequential) which is
likely to be used, and taking account of plausible blocking factors. Storage
requirements can be estimated from table sizes, with a rule-of-thumb allowance for
overheads such as overflow space or indexes which may be used in the physical design.
The aim of making these estimates is not to come up with accurate figures, but to weed
out designs that are clearly infeasible. In practice, second-level design decisions can
often be made by inspection rather than by calculation.
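As a rough illustration of the arithmetic involved, the sketch below (in Python) estimates a transaction time and a table's storage requirement from a handful of figures; the access time, blocking factor, row count and overhead multiplier are purely hypothetical assumptions, not values taken from the text.

def transaction_time_ms(accesses, access_time_ms, blocking_factor=1):
    # Estimated time for one transaction: table-occurrence accesses per
    # transaction, divided by the blocking factor (occurrences retrieved per
    # physical access), times the average device access time.
    return (accesses / blocking_factor) * access_time_ms

def table_storage_bytes(rows, bytes_per_row, overhead=1.3):
    # Estimated storage for one table, with a rule-of-thumb multiplier for
    # overheads such as overflow space and indexes added at physical design.
    return rows * bytes_per_row * overhead

# Hypothetical figures: 12 direct accesses per transaction at 10 ms each with
# no useful blocking, and a 500,000-row table of 200-byte rows with a 30%
# overhead allowance.
print(transaction_time_ms(12, 10))              # 120.0 ms
print(table_storage_bytes(500_000, 200) / 1e6)  # 130.0 MB

The blocking-factor divisor is the only adjustment modelled here; a fuller estimate would also distinguish direct from sequential access times for the device in question.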
