As we have tried to demonstrate throughout this book, Perl scripts can save an enormous amount of time and frustration if you are faced with the task of maintaining or configuring a large number of Windows NT workstations. Once you have put the initial effort into setting up a protocol and infrastructure for your scripting, even the most complicated of tasks can be readily accomplished without a hitch. From time to time, however, it is almost inevitable that something will go wrong. Even if you are meticulous when creating your code and you test every line that you write, occasionally something will have an unanticipated side effect and cause a great deal of frustration. At this point you will ask yourself why you ever bothered with all this scripting nonsense! The answer, of course—as you will know deep down—is that the vast majority of the time, things do not go wrong; on balance, implementing pretty much anything with a well-designed script leads to a far smoother outcome than if you try to achieve the same effect manually. Unfortunately, we cannot give you a magic formula to ensure that your scripts never go wrong. What we can do, however, is give you a few guidelines for safe scripting and point out a few things that tend to cause problems. Ensuring as much as possible that scripts leave workstations in a stable, secure state is the subject of this chapter. We will begin with some general guidelines for scripting and suggest how to avoid some of the more common errors. In the second half of the chapter, we will address the issue of script security and how to prevent a malicious hacker from turning your own scripts against you.
It may be impossible to predict the exact effect your maintenance and configuration scripts will have under all conceivable circumstances, but it is normally very easy to predict the most likely outcome in almost all circumstances.* After all, if this were not the case, there would be no point in scripting at all. If there is a trick to making sure your scripts work, it is to have a consistent development and deployment strategy; then at least, if things go wrong, you will know exactly what you have done (and therefore what needs to be undone).
The most important tool you can possibly possess when you are writing administration scripts is patience. This may seem like quite a weird statement to make, given that the main purpose of scripting is to save time, but as we have said before and will no doubt say again, the savings come from the fact that a script has to be written only once, emphatically not from the fact that it is quick to write in itself. It is so tempting to sit down, rush out a simple script, tweak it until it works, deploy it on your workstations, and get on with the next job. However, this is the classic way of courting disaster. Just as you would never type del *.* /q at the command prompt of your main server without checking and double-checking that you are in the right place in the directory hierarchy, you should never deploy a Perl script on even a single workstation unless you are absolutely sure what it will do and what it will do it to.† Remember that a badly written script running with administrative or system privileges on a workstation can cause just as much damage, if not more. In short, a methodical, calm approach to scriptwriting is always worth the effort.
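One cheap way to build this sort of caution into a destructive script is to make dry-run its default mode: the script reports what it would delete and touches nothing until you explicitly tell it otherwise. The sketch below illustrates the idea; the target directory, the file pattern, and the --for-real switch are all hypothetical choices of ours, not a prescription.

```perl
use strict;
use warnings;
use File::Find;

# Walk $dir looking for files that match $pattern.  Delete them only if
# $for_real is true; otherwise just report what would have been deleted.
# Returns the list of matching paths either way.
sub clean_tree {
    my ($dir, $pattern, $for_real) = @_;
    my @matched;
    find(sub {
        return unless -f $_;      # $_ is the basename; find() has chdir'd
        return unless /$pattern/;
        push @matched, $File::Find::name;
        if ($for_real) {
            unlink $_ or warn "Could not delete $File::Find::name: $!\n";
        } else {
            print "Would delete: $File::Find::name\n";
        }
    }, $dir);
    return @matched;
}

# Dry run unless the operator explicitly asks for the real thing.
my $for_real = grep { $_ eq '--for-real' } @ARGV;
clean_tree('C:/Temp', qr/\.tmp$/i, $for_real);   # hypothetical target
```

The point of returning the matched list is that the same function can be exercised safely in a test harness before it is ever let loose on a real directory tree.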
Next, we present a set of guidelines for script development that, in our opinion, greatly limit the potential for making a serious mistake. The formula we propose is quite strict, and it is inevitable that you will deviate from it from time to time. Nevertheless, it provides an idealized framework that you should keep in the back of your mind whenever you are writing scripts:
A further point to bear in mind is that you should always make your scripts readable. It can often be tempting to write obscure, clever bits of code that look impressive on the page, but this is generally not a good thing to do. Not only is it a pain when you (or someone else) has to modify the script at a later date, but it also makes it very difficult to debug. Ensuring that your code is transparent and easy to comprehend is a challenge that you should always take up. If you do use weird code, employ a particularly complicated regular expression, or exploit an idiosyncrasy of a module that you are using, supply copious comments (#). It is always worth giving a colleague your script to read and asking him to tell you what it does; if he can't, it is probably not clear enough.
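Perl itself can help with readability: the /x modifier lets you space a regular expression out and comment every piece, turning an unreadable one-liner into something a colleague can follow at a glance. The pattern below, which picks a UNC path apart, is purely illustrative:

```perl
use strict;
use warnings;

# Split a UNC path into server, share, and the remainder of the path.
# The /x modifier lets us space the pattern out and comment every piece.
sub parse_unc {
    my ($unc) = @_;
    return unless $unc =~ m{
        ^ \\\\                  # a UNC path starts with two backslashes
        ( [^\\]+ )              # capture the server name
        \\
        ( [^\\]+ )              # capture the share name
        ( (?: \\ [^\\]+ )* )    # capture the rest of the path, if any
        $
    }x;
    return ($1, $2, $3);
}

my ($server, $share, $rest) = parse_unc('\\\\fileserver\\logs\\app\\today.log');
print "server=$server share=$share rest=$rest\n" if defined $server;
```

Written on one line without comments, the same pattern would be exactly the sort of code that fails the "hand it to a colleague" test.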
Once you have written and tested your script and are confident that it does what it should, it is ready to be deployed on your workstations. If it is designed as a tool to be used manually by administrators on an ad hoc basis, deployment may simply involve placing it on a server drive or even a floppy disk; it can then be run from the command line when required. If, however, the script is meant to carry out maintenance on a fleet of workstations (either as a one-off exercise or on a regular basis), it will need to be installed on all relevant workstations. Depending on how you have set up your workstation environment, this may consist of dropping the script into a server drive (to be run by a stub—see Chapter 3, Remote Script Management) or installing it directly, using one of the methods described in Chapter 2, Running a Script Without User Intervention. Whichever method you choose, there are a few helpful things to bear in mind:
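To sketch the stub approach mentioned above (Chapter 3 covers it properly), the stub itself can be tiny: it simply finds the current version of each maintenance script on a server share and hands it to the Perl interpreter. All of the paths here are hypothetical stand-ins for your own layout.

```perl
use strict;
use warnings;

# A minimal stub: run every Perl script found in a (read-only) server
# share, one after another.  Returns the list of scripts it ran.
sub run_all {
    my ($script_dir, $perl) = @_;
    opendir my $dh, $script_dir
        or die "Cannot open script directory $script_dir: $!\n";
    my @ran;
    for my $script (sort grep { /\.pl$/ } readdir $dh) {
        # A nonzero exit status is logged rather than fatal, so one broken
        # script cannot stop the rest of the run.
        my $status = system($perl, "$script_dir/$script");
        warn "$script exited with status $status\n" if $status != 0;
        push @ran, $script;
    }
    closedir $dh;
    return @ran;
}

# Hypothetical locations; substitute your own share and interpreter path.
my $script_dir = '//server/scripts';
run_all($script_dir, 'C:/Perl/bin/perl.exe') if -d $script_dir;
```

Because the stub never changes, updating a maintenance task across the whole fleet reduces to replacing one file on the share.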
We hope that by now the message is clear: methodical thought and careful planning make certain that scripting is an administrator's savior and not a troublesome liability!
At several points in the book we have made reference to the security implications of various aspects of scripting. As yet, however, we have not discussed script security as a subject in its own right, despite the fact that it is clearly an extremely important issue. For the remainder of this chapter, we highlight some important security issues that arise when you use scripts to carry out administrative tasks and suggest how to avoid the associated pitfalls.
Scripts cannot just run by themselves; they have to be run by somebody (or by something). Just like any other program, a running script adopts the security context of that somebody (or something) that ran it. The implication of this is that the extent to which even a disastrous script can wreak havoc is strictly limited by the extent to which its owner has control over a workstation.* This is an extremely comforting thought: even if you have made a terrible mistake while writing a script (for example, creating a del *.* /q scenario), nothing serious can possibly go wrong provided you execute the script in the security context of a user who lacks dangerous privileges (like full control over the file system!). The only slight snag is that if you restrict your scripts in this way, they will probably not be able to function at all. For example, a script whose purpose is to delete the contents of temporary directories cannot carry out its task unless it has delete permissions on the file system (or at least on the relevant part of the file system).
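One cheap safety net that follows from this observation is to have a script check its own security context before doing anything destructive and bail out if the context is not the one intended. A minimal sketch, in which the account name scriptuser is a hypothetical convention of ours and the username is read, as is simplest on NT, from the environment:

```perl
use strict;
use warnings;

# Return true if we appear to be running as the expected account.
# 'scriptuser' is a hypothetical dedicated scripting account; on NT the
# simplest place to read the current username is the environment.
sub check_context {
    my ($expected) = @_;
    my $user = $ENV{USERNAME} || getlogin() || 'unknown';
    return lc($user) eq lc($expected);
}

if (check_context('scriptuser')) {
    print "Running in the expected context; safe to proceed.\n";
} else {
    print "Not running as scriptuser; a real script would refuse here.\n";
}
```

This is not a security measure in itself (an environment variable proves nothing to an attacker); it is a seat belt against the accident of running a workstation script from an administrative command prompt.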
The techniques we have shown you so far in this book involve editing the registry, reading event logs, writing to the file system, and all sorts of other things that require administrative permissions. We have thus far avoided the issue by assuming an Administrator or LocalSystem security context (the latter being the default context for anything running as a service). In the real world, however, it may well be foolhardy to allow all your scripts to run in such powerful contexts. In anything but the most trivial situations, it is worth thinking very carefully about the security context in which scripts should run; this normally involves setting up a special account with permissions optimized for scripting.
In an ideal world, every script you ever used would run in a security context designed especially for it, one that allowed it to carry out all required tasks but nothing more. Realistically, however, this is neither possible nor even particularly desirable: not only would it require a huge effort to create special user accounts every time you created a script or modified an existing one, but the added complication would greatly increase the chances of something going wrong. A far better solution in most situations is to have a single account set up for scripting; this account would have all the permissions that tend to be required for scripting, but no more. In a particularly sensitive environment, you could even have a handful of special accounts, each associated with a stub script and a set of permissions; whenever you add a new script, you add it to the appropriate stub directory, depending on how much power the script needs. When making decisions about a script's security context, what is required is a balance between safety and functionality.
When creating an account for scripting, the issues are not necessarily localized on a workstation. An average script may require permissions only on a workstation, but some (notably stub scripts) also require read access to a server share; a script that carries out a reporting or archiving task may even require write access to a server share. Throwing caution to the wind, an obvious way to deal with this would be to create a domain account that has administrative permissions. As we've said before, however, the obvious solution is not always the best. If you create a domain account with administrative permissions, these will apply on all computers within the domain; that means not only to workstations but also to servers and even domain controllers. The potential for serious damage if a script running in such a powerful security context goes wrong (or if it is compromised by a malicious cracker) is horrendous.
Bearing in mind that many scripts need lots of permissions on the workstations on which they are running, what is the alternative? One feasible possibility is to create two types of scripting accounts, one on the server side and one on the workstation side. If both types of accounts share a username and password, a script running on a workstation will be able to connect to server shares transparently but will not have administrative rights on these shares (or the servers that contain them). The advantages of this scenario are clear:
In short, the balance between safety and the freedom to carry out necessary functions is best struck by creating a separate local account on each workstation to be scripted. This account should have a set of permissions flexible enough to handle most scripting requirements. Equivalent server accounts should be set up with a highly restricted set of permissions, so that if workstation security is compromised, your entire network is not at risk.
Creating special accounts is an essential requirement for safe scripting, but this is only one aspect of script security. It is equally important to ensure that nobody can tamper with script directories on servers or workstations, inadvertently (or deliberately) turning a safe script into a lethal one. In some cases, even allowing users to read a script constitutes a serious breach of security. A few reasons for this are outlined here:
We apologize if we seem to be painting a rather miserable picture, and we certainly do not want to seem alarmist, but security is always an issue to be taken seriously. The good news is that it is not very difficult to keep your scripts free from prying eyes and malevolent users. The key to protecting yourself from a script-based attack is to ensure that the access control list for each script, and for each stub-script directory, grants access only to those people who need it. In most circumstances, this means the account in whose security context the scripts actually run, plus the Administrators group. Further, the scripting account should have only read permissions on the server directory that holds the scripts. This simple rule, in conjunction with awareness of the issues and a bit of common sense, should ensure that administration scripts do not compromise your network security at all.
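On NT, that access control list rule can be applied from the command line with cacls; run without the /E switch, cacls replaces the existing ACL outright with exactly the entries you specify. The directory and account names below are hypothetical illustrations of the scheme, not a prescription:

```
rem Full control for Administrators, read-only for the scripting account;
rem /T applies the new ACL to the directory and everything beneath it.
cacls C:\Scripts /T /G Administrators:F scriptuser:R
```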
As an administrator who makes heavy use of scripting to carry out maintenance and configuration tasks on Windows NT workstations, you can save yourself enormous amounts of time and frustration. Your workstations are also likely to have far better track records for consistency and reliability than those of your nonscripting colleagues; a well-written, sensibly deployed script will behave more predictably than any human can! However, the power of scripting brings with it some potential pitfalls: an ill-conceived, buggy script running with administrative privileges can cause a catastrophe very quickly indeed. Further, careless deployment could lead to the sort of security hole on your network that will make you extremely unpopular with your boss. In this chapter we have discussed strategies that you can use to avoid falling into any serious pitfalls and to ensure that all of your scripts run safely and securely.
*Occasionally, the most obscure anomaly in software or even hardware design will make it totally impossible to predict the outcome of even a straightforward(ish) computation. Remember the Pentium FDIV bug?
*You can, of course, combine the two types of sin by writing a Perl script that accidentally invokes del *.* /q on completely the wrong part of the directory hierarchy. One of the authors of this book has suffered at the hands of just such a script written by the other. The script was meant to delete stuff selectively based on a regular expression search. Despite assurances from the one who wrote the script that it worked perfectly, the one who ran it managed to get the whole of his home directory wiped. If you can guess which author is which, we'll send you a free copy of the script. You can even have distribution rights!
*We should make it clear that we are certainly not against fresh ideas; we actively encourage them. However, these ideas should be incorporated into a well-considered strategy, not just implemented in an ad hoc fashion.
*Running Perl with this flag actually produces a lot of very useful diagnostics. For full details, see the perldiag page in the online documentation supplied with the ActiveState distribution.
*We refer here to the owner of the running process; this is not necessarily the same as the owner of the file.
*If you are connecting to a server resource and sending plain-text passwords over the network, bear in mind the possibility that some cracker somewhere on your subnet might have a network card running in promiscuous mode.