There is a persistent—and often lively—debate in the programming community about the benefits and appropriateness of using stored programs in applications.
Database stored programs first came to prominence in the late 1980s and early 1990s during what might be called the client/server revolution. In the client/server environment of that time, stored programs had some obvious advantages (aspects of which persist in N-tier and Internet-based architectures):
- Client/server applications typically had to carefully balance processing load between the client PC and the (relatively) more powerful server machine. Using stored programs was one way to reduce the load on the client, which might otherwise be overloaded.
- Network bandwidth was often a serious constraint on client/server applications; execution of multiple server-side operations in a single stored program could reduce network traffic.
- Maintaining correct versions of client software in a client/server environment was often problematic. Centralizing at least some of the processing on the server allowed a greater measure of control over core logic.
- Stored programs offered clear security advantages, because in the client/server paradigm, end users typically connected directly to the database to run the application. By restricting access to stored programs only, users would not be able to perform ad hoc operations against tables and other database structures.
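The security pattern described above can be sketched in MySQL-style SQL. All schema, procedure, and account names here are hypothetical, chosen only to illustrate the idea: the end user's account is granted EXECUTE on a procedure but receives no privileges on the underlying table.

```sql
-- Hypothetical procedure that encapsulates the only permitted
-- modification to the employees table.
DELIMITER //
CREATE PROCEDURE app.give_raise(IN emp_id INT, IN pct DECIMAL(5,2))
BEGIN
  UPDATE app.employees
     SET salary = salary * (1 + pct / 100)
   WHERE employee_id = emp_id;
END //
DELIMITER ;

-- The end-user account may call the procedure...
GRANT EXECUTE ON PROCEDURE app.give_raise TO 'end_user'@'%';

-- ...but because no SELECT/INSERT/UPDATE/DELETE privileges are granted
-- on app.employees, an ad hoc statement such as
--   UPDATE app.employees SET salary = 1000000;
-- issued directly by 'end_user' fails with an access-denied error.
```

In this arrangement the procedure body runs with the privileges of its definer (the default in MySQL), so the caller needs no direct table access at all.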
The use of stored ...