The vote is in! (CHECK CONSTRAINTS vs. TRIGGERS vs. STORED PROCEDURE business logic)
Thank you to everyone who "voted" - a great day to talk about the issues of the day :).
After much research, I've found that we live in a divided country. Some of us like triggers. Others wouldn't be caught dead with a trigger and do lots of business logic validation within stored procedures. Some of you consider CHECK constraints, depending on what else is available on the ballot. Nobody uses CHECK constraints for all of their validation, though a surprising number use some of the lesser-known and unsupported corners of CHECK constraints in SQL Server to get the job done.
My original purpose for asking about business logic validation was to see how many people use CHECK constraints (and to ponder possible future features in the area). What I learned was that there are lots of emotions, questions, and opportunities for SQL Server around business logic validation.
Here's a rough summary of what I've heard from customers.
1. Applications that are stored-procedure based (often OLTP applications of various sizes) usually put a good chunk of the business validation logic in the server within these stored procedures. The good part about this design is that all of the logic can be centralized on the server. This usually aligns well in organizations where the DBA job is separate from the application developer job (as the DBA can use this design to validate data coming across that trust boundary). The challenges with this design are that writing T-SQL code can be more difficult, that similar routines may have to duplicate logic, and that this may be "slow" (which I won't even try to define, but we can accept that it takes time to validate constraints of various forms, no matter the application).
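To make the stored-procedure pattern concrete, here's a minimal sketch. The tables, column names, and error messages are all hypothetical - the point is just that every business rule is checked inside the procedure before the write happens:

```sql
-- Hypothetical order-entry procedure: all validation lives on the server,
-- inside the procedure, before the INSERT is attempted.
CREATE PROCEDURE InsertOrder
    @CustomerID int,
    @Quantity   int,
    @UnitPrice  money
AS
BEGIN
    -- Business rule: quantity must be positive.
    IF @Quantity <= 0
    BEGIN
        RAISERROR('Quantity must be positive.', 16, 1);
        RETURN;
    END;

    -- Business rule: the customer must exist and be active.
    IF NOT EXISTS (SELECT 1 FROM Customers
                   WHERE CustomerID = @CustomerID AND IsActive = 1)
    BEGIN
        RAISERROR('Unknown or inactive customer.', 16, 1);
        RETURN;
    END;

    INSERT INTO Orders (CustomerID, Quantity, UnitPrice)
    VALUES (@CustomerID, @Quantity, @UnitPrice);
END;
```

Note the duplication risk mentioned above: an UpdateOrder procedure would have to repeat these same checks.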
2. Applications that use triggers often do similar things. Curiously, I didn't find anyone actively building such designs, so most of these were legacy. I don't know if I should read anything into this or not (people building new systems with triggers - please mail me!).
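As a sketch of what a trigger-based design looks like (again, the table and column names are made up), the same kind of quantity rule can be enforced in an AFTER trigger:

```sql
-- Hypothetical AFTER trigger enforcing a business rule on Orders.
-- Note that it must handle multi-row inserts by examining the whole
-- "inserted" pseudo-table - a classic trigger pitfall when forgotten.
CREATE TRIGGER trg_Orders_ValidateQuantity
ON Orders
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted WHERE Quantity <= 0)
    BEGIN
        RAISERROR('Quantity must be positive.', 16, 1);
        ROLLBACK TRANSACTION;
    END;
END;
```

The appeal is that the rule fires no matter which procedure or application performs the write; the cost is that the logic runs after the modification and has to roll it back.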
I didn't see a specific delineation between why one would pick one path or the other. My sample size is probably not quite big enough to draw conclusions about this.
3. Some people use CHECK constraints. Often this was the result of a DBA trying to deal with issues across the "trust boundary" between database developers and DBAs. Many people didn't know much about CHECK constraints, but the ones who used them typically swore by them.
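For those who haven't used them, a CHECK constraint declares the rule once, in the table definition, and the engine enforces it on every insert and update regardless of which application or procedure performs the write - which is exactly why it helps at that trust boundary. A hypothetical example:

```sql
-- Hypothetical table with declarative validation via CHECK constraints.
CREATE TABLE Orders
(
    OrderID    int IDENTITY PRIMARY KEY,
    CustomerID int NOT NULL,
    Quantity   int NOT NULL
        CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0),
    UnitPrice  money NOT NULL
        CONSTRAINT CK_Orders_UnitPrice CHECK (UnitPrice >= 0)
);

-- This fails with a constraint violation no matter who the caller is:
-- INSERT INTO Orders (CustomerID, Quantity, UnitPrice) VALUES (1, -5, 9.99);
```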
4. (No customers sent this to me in my latest inquiry, but I'll add it to the discussion.) As databases scale larger and larger, it becomes necessary to factor portions of the application to make it more loosely coupled. This typically relates to how many CPUs you can stuff into a single machine, so it is more of a problem at the high end. Service-oriented architectures speak to this issue. In business-logic terms, validation often moves to the mid-tier as applications scale. That doesn't mean you should start with it there - most applications never get to that scale. It does make sense to factor your business logic so it can be moved/reimplemented in a different tier.
My advice to people considering new applications today is to think about the data constraints more explicitly than many people do today. Even if you leave them unenforced, going through the exercise of stating what the database data *should* look like can surface issues in your application. I get the impression that this is a bit of a lost art.
One last point - PRIMARY KEY, UNIQUE, CHECK, and FOREIGN KEY constraints are all used by the optimizer in plan generation. They can be helpful there, but maintaining those constraints is considered expensive in some customer scenarios, such as large data warehouses, where each index is scrutinized. I'd recommend considering database-level constraints in non-DW applications, where possible. It can often lead to substantial improvements in certain query plans.
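A small example of the optimizer putting a constraint to work (schema hypothetical; this relies on the constraint being trusted, i.e. not disabled or created WITH NOCHECK):

```sql
-- Hypothetical table where a CHECK constraint guarantees Amount >= 0.
CREATE TABLE Sales
(
    SaleID int PRIMARY KEY,
    Amount money NOT NULL CHECK (Amount >= 0)
);

-- The optimizer can prove this predicate contradicts the constraint,
-- so the plan can be reduced to one that returns zero rows without
-- ever touching the table (contradiction detection):
SELECT SaleID FROM Sales WHERE Amount < 0;
```

Similar reasoning lets FOREIGN KEY constraints eliminate redundant joins in some plans - declared constraints are information the optimizer can exploit.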