This material on VSS is an archive from fall 1998, about two decades ago. First I changed jobs so I no longer had any association with VSS. Then after a couple more years I changed careers completely so I wasn't associated with high tech at all. Then after even more years I retired altogether. I've left the VSS material on this site for use by others, even though I no longer remember exactly what it's about, or even understand some of it.
VSS continues to be used and to provide integration with other products, even though the VSS software product itself has been retired. Almost all VSS consulting services have disappeared. Microsoft Mainstream Support is no longer available for any version of VSS, and the end is near even for Microsoft Extended Support for VSS2005. As a result, roll-your-own support may become more important, and the experience with VSS related by these pages may turn out to be quite useful to some despite its great age.
This material reflects experience with VSS5. The user interfaces and the feature set in VSS6 were virtually identical, so almost all of this material continued to apply for several years. But I never had any detailed knowledge of the user interfaces or feature set of VSS2005, and so simply assumed most parts of this material still applied.
These are recommendations for setting up automated unattended builds. A specific script that implements these recommendations is available as a concrete example.
These recommendations have been divided into several categories.
While implementing an automatic build system you will run into lots of different problems and start to think like a "generalist" with a broader range of questions than most developers can even imagine. So get a recent MSDN Library, learn how to search through it, and use it heavily. Learn how to search Usenet archives and other Internet groups, especially comp.software.config-mgmt and microsoft.public.visualsourcesafe. The microsoft.public.visualsourcesafe Usenet newsgroup spawned an FAQ document.
Do not do anything incrementally. Have no dependencies on files from a previous build. Do not use any files from a previous build even if they exist. Do not try to figure out "what changed" and recompile only those modules. Do everything from scratch every time.
Trying to do only the minimum amount of work necessary makes the build process ten times more complicated. If builds take too long, run builds overnight or throw hardware at the problem or both. After all, it's the machine doing the work, not you. Builds have been done incrementally for several decades. The tradition is deeply entrenched, and many folks don't even realize there's any other way. But machines are so fast and disks are so cheap these days the tradeoff just isn't worth it. If you try to make your build process "smart" you'll spend most of your time debugging problems where the automated build did either too much or too little.
Avoid changing major tool revisions for any particular piece of code. If a piece of software was developed and tested with version N, then always build it with version N ...especially when building some sort of "patch" to a version that was already released. Tool vendors will usually say their newest tool is totally upward compatible. Don't believe it. In my experience changing tools always results in weird bugs, extensive debugging, and eventually retesting the entire product. Usually there is some way to allow both the older version and the newer version of a set of tools to coexist on the same machine; use it.
If for performance you have some mechanism whereby the source control system will fetch only changed files rather than all files, be sure to keep that mechanism out of any scripts or build instructions that might be used by anyone else. Automatic fetching of changed files from a source control repository is a great source of errors and frustration to individual developers, and should be avoided for everything except the automated builds done on behalf of the entire group. Avoid it even if the documentation for the source control system presents an example of how to implement it.
Use a machine on which no other server functions --especially not source control-- reside. If possible use a completely dedicated machine. Give the machine plenty of disk and plenty of RAM so you don't run out of storage space all the time and so you don't wait forever for a simple build to complete.
Do all your scripts in one language. Depending on what you have and what you're comfortable with, use PERL, a Unix-like "shell", a third-party add-on tool like WinBatch, or the Command Shell ("extended" batch language) in WinNT4. Don't try to use the original batch language available on Win31 and Win9x and WinNT3x. Also you probably want to avoid things like WSH (Windows Scripting Host) and Visual Basic. I find them far too verbose, and trying to make build automation "object oriented" is like hitting a pin with a sledgehammer.
Although the new Command Shell ("extended" batch language) that first became available in WinNT4 still has plenty of quirks, it's a reasonable choice for implementing serious scripts, while the original DOS/Windows batch language is not. While the Command Shell is usable by itself, it really takes off when augmented by something like the NT Resource Kit. The usability of the Command Shell is so closely tied to a set of external power tool-like commands that you may want to require the package that supplies the additional commands as part of your minimal software platform. A good description of that version of the new Command Shell and how to use it is the book Windows NT Shell Scripting by Tim Hill.
(The Command Shell was further refined in later versions of Windows. Notable additions included the ability to easily and more intelligently remove the quotation marks surrounding some arguments, some control over when variables are evaluated and substituted, and an optional else clause on if statements. [Because the NT4 and later enhancements were so extensive, backward compatibility was awkward. It was sometimes difficult to make a new script run correctly on an old version of Windows.] Despite the improvements, the use of the Command Shell never caught on, and once again these days attempting to implement complex scripts in BATch is generally considered a newbie mistake.)
Especially if you select something like the DOS/Windows batch language, insist that your minimal build platform be at least WinNT4 with command extensions so you can use the Command Shell exclusively. Don't waste your time trying to accommodate Win95/98 or WinNT351 systems with the old DOS/Windows batch language (without the "command extensions") ...at best you'll get a really bad headache.
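As a small illustration of what the NT4 "command extensions" buy you over the old batch language, here is a sketch (the file names are invented); all three constructs below work in the NT4 Command Shell and fail under COMMAND.COM on Win9x:

```
REM SET /A does real arithmetic, something old batch simply cannot do.
SET /A RETRIES=3+1

REM IF now has parenthesized blocks and an ELSE clause.
IF EXIST BUILD.LOG (
    ECHO Found a log from an earlier run
) ELSE (
    ECHO Starting clean
)

REM FOR /F can parse text files line by line.
FOR /F "tokens=1" %%v IN (VERSION.TXT) DO ECHO Version is %%v
```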
Remember that just because the product runs on 39 different platforms doesn't mean you have to try to build it on all those different platforms. For example if you're building for Win31, Win95, Win98, WinNT351, WinNT4, and WinNT5, you can do all of the building on just one platform (probably WinNT4). Let the code jockeys worry about platform portability of the final result; don't let it be something you worry about when building.
You may also want to add a dependency on the NT Resource Kit, on MKS Toolkit, or on various old and/or shareware tools. In my experience none of these are absolutely necessary, but they are sure nice, especially the ones that "match" your chosen scripting language. Don't make any of these "optional", trying to use them if they're present on the system but getting along without them if they're not. Either require them, or don't use them at all. If you do have a requirement like this, I suggest you check for the presence of the needed tools right up front and give up with a clear message if they're not there. Bumbling ahead with an automated build when a few of the tools aren't there can make a giant mess and waste a lot of your time figuring out what happened.
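The fail-fast check can be as simple as the following sketch; the tool names and the NTRESKIT and PERLDIR variables are just placeholders for whatever your build actually requires:

```
REM Give up right away, with a clear message, if a required tool is missing.
IF NOT EXIST "%NTRESKIT%\ROBOCOPY.EXE" (
    ECHO Required tool ROBOCOPY.EXE not found in %NTRESKIT% -- aborting build
    GOTO fail
)
IF NOT EXIST "%PERLDIR%\PERL.EXE" (
    ECHO Required tool PERL.EXE not found in %PERLDIR% -- aborting build
    GOTO fail
)
```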
Try to localize all your system differences to explicit individual environment variables. Few things are as frustrating as putting together an automated build system that will only work right with one particular PATH setting which you've tweaked and tweaked again for a month, only to have some eager beaver accidentally reset the PATH. PATH is too long and complicated; avoid dependencies on it if you can. Put those things you need in individual variables so they can't all get wiped out in one fell swoop and so they aren't interdependent.
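One way to follow this advice is to give each tool its own variable and invoke it by full path, so nothing depends on PATH at all (the variable names and install locations below are invented):

```
REM Each tool gets its own variable; a trashed PATH can't break these,
REM and changing one tool's location doesn't disturb the others.
SET NMAKEEXE=C:\Program Files\DevStudio\VC\bin\NMAKE.EXE
SET ZIPEXE=C:\TOOLS\PKZIP.EXE

"%NMAKEEXE%" /F MAKEFILE.MAK
```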
If your script calls a setup such as VCVARS32 repeatedly, take care that the PATH doesn't eventually overflow after being added to every time. Find a PATH trimmer to remove duplicates. Or find an intelligent APPEND that will make an addition only if the setting isn't currently there. Or figure out a way to decide if the PATH is already okay and so resetting it can be skipped.
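One simple guard is to mark the environment the first time VCVARS32 runs, so later calls are skipped; the marker variable name and the VCVARS32 location are invented, and this assumes nothing else in the build resets the PATH in between:

```
REM Run VCVARS32 only once per shell session, so repeated calls can't
REM keep appending to the PATH until it overflows.
IF NOT "%VCVARS_DONE%"=="1" (
    CALL "C:\Program Files\DevStudio\VC\bin\VCVARS32.BAT"
    SET VCVARS_DONE=1
)
```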
Don't have any reliance on the settings of system wide environment variables such as LIB or INCLUDE. If you invoke MSDEV you don't have to worry about this, as it uses its own internal settings rather than the ones in the environment. If you call the tools some other way though --for example from NMAKE-- let VCVARS32 clear and reinitialize these environment variables every time, then if necessary append to them whatever settings are necessary for the immediate build step.
The AT command that comes with WinNT is barely adequate for kicking jobs off on a clock when you're not there. You'll probably have so many permissions problems you'll regret using it. Instead, use either WinAT from the NT Resource Kit or a third party product such as LaunchPad.
Create a new userid that doesn't correspond to any real person (ex: "builder") and use it for all your logons, file permissions, sending email, etc. Always use this same logon, so you always have the same file permissions on other systems. Make this a "full" user (probably defined in some NT domain), not something you hack together quickly by defining a user just on one system.
But try not to do anything that relies on this user being configured any particular way. If you have to have a user set some particular way (some obscure VSS initialization variable, something in the "profile", etc.) create a second new user for this purpose (ex: "configmgmt"). Use it only where you really need it, and never use it interactively.
The "Configuration Builder" that used to come with most source control systems is typically nothing more than a variant of "make" that can look at the dates inside files in the source control repository.
The only reason you'd want to use one is to have the computer make intelligent choices about what's changed and so must be recompiled. If you follow the above recommended strategy of rebuilding everything rather than relying on an "intelligent incremental" build, you don't need a Configuration Builder. (In fact, since Configuration Builders are generally just an "extra" for source control systems, you're better off with a separate tool that isn't tightly linked to the source control system, as it's more likely to be up to date and may be better supported.)
Accessing your source control system --for example VSS-- should be only a tiny fraction of your automated build and test system. Once you can run something like SS GET xyz from the command line, you hardly need anything else. I get a complete fresh copy of all the source from VSS every time. Although doing this takes over an hour to execute and so seems really significant, it's actually only a very few lines of the script.
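The fetch itself can be as short as the following sketch. The server share, project path, and working directory are made up; the switches are from the VSS5 command line, where -R means recursive, -I- suppresses interactive prompts, and -GL sets the folder the files are placed into:

```
REM SSDIR tells SS.EXE where the repository (the SRCSAFE.INI directory) lives.
SET SSDIR=\\BUILDSRV\VSS

REM Start from an empty directory every time -- nothing incremental.
IF EXIST C:\BUILD\SRC RD /S /Q C:\BUILD\SRC
MD C:\BUILD\SRC

REM Pull a complete fresh copy of the whole project.
SS Get $/MyProduct -R -I- -GLC:\BUILD\SRC
IF ERRORLEVEL 1 GOTO fail
```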
Keep all source code, SDKs used, and documentation source under source control. Do not keep any files you rebuild (EXE, DLL, OCX, maybe viewable documentation, etc.). You can handle tools either by keeping them under source control or by installing them from a known good archive location, for example the vendors' original distribution CDs with all install time defaults taken. I personally do not keep tools (VC++, MSDEV, InstallShield, etc.) under source control. Instead they are installed on all machines in known ways, either in known locations or with environment variables pointing to the alternate locations.
The only source control operation that's a little more complex is "checking out" some file that has a build number in it, incrementing that number, then "checking in" the file before proceeding to GET everything. To do it robustly and with error checking takes about 20 lines, of which something like 3 lines actually invoke various VSS functions. Initially I did this either with a shareware text editor or with a small custom utility program. But later I found that the SET /A xyz command in NT4 works just fine. So now I can do it without ever invoking something that doesn't come standard with the OS.
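Stripped of most of its error checking, the bump can be sketched like this. The project path is made up, and BUILDNUM.TXT is assumed to contain just the number on a single line:

```
REM Check out the build-number file, bump it with SET /A, check it back in.
SS Checkout $/MyProduct/BUILDNUM.TXT -I- -GLC:\BUILD
IF ERRORLEVEL 1 GOTO fail

REM Read the old number and add one.
FOR /F %%n IN (C:\BUILD\BUILDNUM.TXT) DO SET /A BUILDNO=%%n+1

REM Redirection comes first so a bare number like "2" just before ">"
REM can't be misread as a file-handle redirect.
>C:\BUILD\BUILDNUM.TXT ECHO %BUILDNO%

SS Checkin $/MyProduct/BUILDNUM.TXT -I- "-CBuild %BUILDNO%"
IF ERRORLEVEL 1 GOTO fail
```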
If you're using Microsoft VC6 (part of Visual Studio 6), look at the /MAKE command line switch for MSDEV. It lets you instruct MSDEV to do almost all the work of building ...and it means you can mostly use exactly the same build instructions as your developers do, so you don't have to maintain two parallel systems or worry about a developer shrugging you off with "works fine for me". When I converted a build script from using VC5 to using VC6, I was able to replace about 15 lines of a BAT file with just a single line that invoked MSDEV.
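With VC6 the entire compile step can collapse to something like the following; the workspace and configuration names are made up, /REBUILD forces the build-everything-from-scratch behavior recommended above, and /OUT captures the build log:

```
REM One line replaces a screenful of per-project build commands.
MSDEV C:\BUILD\SRC\MyProduct.dsw /MAKE "MyProduct - Win32 Release" /REBUILD /OUT C:\BUILD\LOGS\COMPILE.LOG
IF ERRORLEVEL 1 GOTO fail
```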
If you're using other tools or an older version of VC so the batch capabilities of MSDEV aren't available to you, get a "make" utility and make heavy use of it. For example locate a copy of Microsoft's NMAKE (it was significantly enhanced in Visual Studio 97 Service Pack 3 before it was deemphasized by Microsoft, so get that version or later). Most developer tools have an option somewhere to output "Makefiles" that do the right thing. Find it and use it. With any luck, you'll actually have to hand-code almost zero Makefiles yourself.
Keep copious logs of both progress messages and error messages. Include the top level of these logs in announcements you send to others so they can identify problem areas and figure out roughly what each problem is without having to get more information. Keep all the lower level logs around on disk where you can access them. You should be able to locate and fix virtually any problem just by looking through the detailed logs. If you find yourself frequently rerunning parts of the build, your logs aren't detailed enough.
Report results, both to yourself and to others, via email. To have an unattended process under a Windows-like operating system send email, you'll need to get a batch email adapter such as MAPISEND.EXE. If the batch email adapter you use supports only a very small message size, put most of the information in "attachments".
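With the Resource Kit's MAPISEND.EXE, sending the announcement can look roughly like the sketch below. The profile name, recipient, and file names are invented, and the exact switches vary by version, so check MAPISEND's own help before relying on these:

```
REM Mail the top-level summary as the message body and attach the
REM detailed log, logged on as the "builder" mail profile.
MAPISEND -u builder -p %MAILPW% -r buildwatchers -s "Build %BUILDNO% complete" -t C:\BUILD\LOGS\SUMMARY.TXT -f C:\BUILD\LOGS\DETAIL.LOG
```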
Don't try too hard to make the script figure out whether or not a build was successful. About all you can expect the automated scripts to do with reasonable accuracy is report "complete" or "incomplete". If you try to do much more detailed analysis of results, you will spend an inordinate amount of time on this one aspect of the scripts, and they will probably report incorrectly every once in a while anyway.