STAY ALERT

Use Managed Code To Generate A Secure Audit Trail

Mark Novak

This article discusses:

  • Differences between auditing and event logging
  • Auditing from managed code
  • Creating an audit provider
  • Implementing a sample auditing application
This article uses the following technologies:
C#, .NET Framework, Windows Server 2003

Code download available at: Auditing.exe (164 KB)

Contents

Overview of Event Logging
Windows Support for Auditing
Auditing from Managed Code
Building the Resource DLL
Lifecycle Management
Building an Auditing Application

Auditing is indispensable for security-related monitoring of any server-based application, from e-mail servers to databases to Web servers. In today's security-conscious environments, a reliable audit trail is a valuable forensic tool and often a legal requirement for certain industries. For example, regulations such as Sarbanes-Oxley and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) require audit trails for certain systems, applications, and data. The Windows Server™ 2003 operating system provides features that let you enable a wide range of applications to make use of auditing functionality.

Auditing is, in many ways, similar to the well-known Windows® event logs. Despite the apparent similarity, there are important differences between auditing and event logs. First, the APIs used for generating audits are new for Windows Server 2003 and are entirely separate from the APIs used for event logging. Second, from a security standpoint, audit logs are uniquely suitable to tasks requiring tight control over who can generate and read the logs.

I'll start by taking a look at auditing from the operating system perspective. I'll also describe a sample managed code implementation that will allow you to add auditing to your own server applications. The functionality described here requires Windows Server 2003 and the Microsoft® .NET Framework 1.1 or later (it will also work if you install the Authorization Manager Runtime on Windows 2000 Server SP4).

Overview of Event Logging

The event log, a well-known Windows monitoring and reporting mechanism, is a collection of event messages. Each message includes information such as a timestamp, the message type, an event ID, the source of the event, and other data. The message-specific data can be either a binary blob or a set of insertion strings. Each event log message has some static data common to every message with the same event ID, and a set of data that changes from one message instance to another. A typical event log message looks like, "User %1 has been granted %2 access to file %3". In this case, insertion strings could be "Joe", "Read", and "c:\foo.txt", or they could be "Mary", "Write", and "c:\bar.zip". However, the core message remains the same.
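As a minimal illustration of this template-plus-insertions model, the following C# sketch substitutes %1-style placeholders the way Event Viewer combines a static message with its instance-specific strings. (The Format helper here is purely illustrative; in the real event log, the substitution is performed by the Win32 message-formatting machinery against a compiled message resource.)

```csharp
using System;
using System.Text.RegularExpressions;

class InsertionStringDemo
{
    // Substitute %1, %2, ... placeholders with instance-specific strings,
    // mimicking how a static event message template is combined with its
    // insertion strings for display.
    public static string Format(string template, params string[] inserts)
    {
        return Regex.Replace(template, @"%(\d+)",
            m => inserts[int.Parse(m.Groups[1].Value) - 1]);
    }

    static void Main()
    {
        string template = "User %1 has been granted %2 access to file %3";
        // Same core message, different insertion strings per instance
        Console.WriteLine(Format(template, "Joe", "Read", @"c:\foo.txt"));
        Console.WriteLine(Format(template, "Mary", "Write", @"c:\bar.zip"));
    }
}
```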

If you open the Event Viewer application, you will see that by default there are three event logs in Windows: Application, System, and Security (see Figure 1). These three logs may seem similar, but there are important differences. The Application log is used by applications to log the events specific to them. Each application logs events under a different source name, and you can use that source name for filtering if you want to see only messages logged by a particular application. The System log is used by system services such as DHCP, W32Time, and others. The Security log is where audit messages, such as logon success, logon failure, and object access, are stored.

Figure 1 Security Event Log


As an aside, if you look inside the different logs, you will notice that the Application and System logs contain three different types of messages: Error, Warning, and Information. The Security log, however, has very different types of messages: it contains only Success Audit and Failure Audit messages.

The Application event log has weak default protections. The System log has stronger controls over who can write to the log, but it doesn't place many restrictions on who can read the log. For instance, any interactively logged on user can read both Application and System logs and write to the Application log. This means you can place little faith in the true origin of messages in the Application log, and you cannot rely on the messages being disclosed only to trusted parties.

The Security log is unlike the other logs in two important respects. First, in the default configuration it is protected by a strong access control list (ACL) and privilege checks, which limits the set of individuals who can read its contents to local system, administrators, and holders of the Security privilege. Second, and most important, only one entity is allowed to write to the Security log—the Local Security Authority (LSA). This effectively means that every time you attempt to call the RegisterEventSource API for the Security log, you will get an ACCESS_DENIED error, even if you're running as Local System! This design ensures that the Security log contains information from only trusted sources. The Security log can be cleared by an authorized user (usually an administrator), but when that happens, a message is inserted into the empty log listing the user who was responsible for clearing the log.

Windows Support for Auditing

In the not-so-distant past, auditing was the realm of trusted system components only. Audits could be generated by trusted system services such as NetLogon or the Security Accounts Manager (SAM), but not by ordinary user-mode applications. The reason, as I mentioned earlier, is that only the LSA is allowed to generate events in the Security log. The write access restriction is hardcoded inside the operating system; even changing the ACL on the Security log would not change this behavior. This approach effectively locked application developers out of the Security log.

This limitation turned out to be a major one. Consider the needs of most server applications—database servers, e-mail servers, Web services, and so on. Such applications would dearly love to generate events that cannot be either spoofed or leaked to unauthorized eyes. One solution is to lock down the ACLs on the application log, but that can have unintended consequences for older applications that are not prepared to deal with access failures when accessing the logs. Further, because there is no per-application granularity in the event log access control story, the ACL approach falls short.

In Windows Server 2003, the write access limitation on the Security log was relaxed somewhat, without changing the fundamental design, through the introduction of a special set of APIs (see Figure 2). These APIs use Local Procedure Calls (LPCs) internally to interact with the LSA, instructing it to generate audit logs on the application's behalf. The mechanism is elegant and simple.

Figure 2 Authorization APIs

  • AuthzInstallSecurityEventSource: Installs the specified source as a security event source. Required permissions derive from the permissions of the registry keys that the routine manipulates (currently administrators group and local system only).
  • AuthzRegisterSecurityEventSource: Registers a security event source with the LSA and returns a security event source handle. Requires the caller to possess SeAuditPrivilege.
  • AuthzReportSecurityEvent: Generates a security audit for a registered security event source. Performs no permission verification, but requires the caller to supply a valid security event source handle.
  • AuthzReportSecurityEventFromParams: Generates a security audit for a registered security event source using the specified array of audit parameters. Performs no permission verification, but requires the caller to supply a valid security event source handle.
  • AuthzUnregisterSecurityEventSource: Unregisters a security event source with the LSA.
  • AuthzUninstallSecurityEventSource: Removes the specified source from the list of valid security event sources. Requires the same permissions as AuthzInstallSecurityEventSource.

First, the application registers a security event source handle with LSA by calling AuthzRegisterSecurityEventSource. The only parameter that is of interest for this API is the name of the event source, which can be almost anything, subject to a few restrictions. For instance, it cannot be named "Security" because that name is reserved for system use. The security event source handle returned by this call is used in the following steps.

Next, events are generated by calling one of two closely related APIs: AuthzReportSecurityEvent or AuthzReportSecurityEventFromParams. Finally, when the application shuts down, it unregisters the security event source handle by calling AuthzUnregisterSecurityEventSource.
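A rough P/Invoke sketch of this register/report/unregister lifecycle follows. The declarations paraphrase the native signatures from authz.h; the AUDIT_PARAMS marshaling required to actually report an event is omitted for brevity. This code runs only on Windows Server 2003 (or later) and only for a caller holding SeAuditPrivilege:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class AuthzLifecycleSketch
{
    [DllImport("authz.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool AuthzRegisterSecurityEventSource(
        uint dwFlags, string szEventSourceName, out IntPtr phEventProvider);

    [DllImport("authz.dll", SetLastError = true)]
    static extern bool AuthzUnregisterSecurityEventSource(
        uint dwFlags, ref IntPtr phEventProvider);

    static void Main()
    {
        IntPtr provider;
        // Fails with ERROR_ACCESS_DENIED unless the caller holds SeAuditPrivilege
        // and the source name was installed by an administrator.
        if (!AuthzRegisterSecurityEventSource(0, "SampleApplication", out provider))
            throw new Win32Exception(Marshal.GetLastWin32Error());
        try
        {
            // ... call AuthzReportSecurityEvent or
            //     AuthzReportSecurityEventFromParams here ...
        }
        finally
        {
            // Always release the LSA-side handle on shutdown.
            AuthzUnregisterSecurityEventSource(0, ref provider);
        }
    }
}
```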

One aspect of AuthzReportSecurityEventFromParams deserves special mention. The last parameter to this function is an array of parameters that will translate into insertion strings for the event message. The data types that these parameters can take are limited by the operating system to the following:

  • ULONG_PTR
  • PWSTR
  • ULONG
  • SID
  • GUID
  • AUDIT_OBJECT_TYPES
  • FILETIME

This is in contrast to the event log APIs, which allow only strings and binary data blobs. The rationale here is that the string representation of data will be controlled by the formatting routines inside of the LSA for consistency. When I discuss the managed code implementation of this functionality, you will see how this translates into the object model of the class library.
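One way a managed wrapper could enforce this type restriction is to classify each parameter value before handing it to the LSA. The following sketch is hypothetical: the category names are illustrative labels rather than the real AUDIT_PARAM_TYPE values, and this is not the sample library's actual code.

```csharp
using System;

// Hypothetical sketch: classify audit parameters into the categories the
// operating system accepts, rejecting everything else up front. The string
// labels are illustrative only.
static class AuditParamClassifier
{
    public static string Classify(object value)
    {
        if (value is string)   return "PWSTR";
        if (value is uint)     return "ULONG";
        if (value is ulong)    return "ULONG_PTR";
        if (value is Guid)     return "GUID";
        if (value is DateTime) return "FILETIME";
        throw new ArgumentException(
            "Type not supported for security audit parameters: " + value.GetType());
    }

    static void Main()
    {
        Console.WriteLine(Classify("alice"));    // PWSTR
        Console.WriteLine(Classify(Guid.Empty)); // GUID
    }
}
```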

Do not be surprised if you don't see anything in the Security log after successfully invoking the auditing API. The messages will only appear in the Security log if you turn on the Audit Object Access setting. You can do this in the secpol.msc Microsoft Management Console (MMC) snap-in by selecting Local Policies, then selecting Audit Policy.

Figure 3 Auditing


The process of generating audits from user-mode applications is illustrated in Figure 3.

An important security check prevents just any security principal from generating audits: the caller of AuthzRegisterSecurityEventSource must possess the Audit privilege. Generally, the Audit privilege is held not by individual users (including administrators), but by accounts used for hosting services on the system. By default, this privilege is given only to local service and network service accounts; it can also be assigned by administrators to other service accounts that are used to run the trusted components of the system. So, for instance, if Microsoft Exchange Server wanted to generate security audits, the account used for running the Microsoft Exchange Store service would have to possess the Audit privilege. In fact, the sample code for this article can only be executed if you have the Audit privilege (even if you are the administrator, you will have to grant the Audit privilege to yourself through the MMC secpol.msc snap-in). This snap-in can also be used to enable object access auditing—without this step audits generated by the mechanism described in this article will not show up in the Security log.

This entire process requires the administrator to set up the security event source in the first place. Assuming that auditing will be performed by a particular application, the security event source would be installed by the administrator at the application's install time, and removed when the application is removed from the system. Although this installation can be performed by direct manipulation of certain registry keys, the recommended approach is to use the predefined APIs AuthzInstallSecurityEventSource and AuthzUninstallSecurityEventSource. Naturally, only administrators are allowed to call these APIs, but the caller does not require the Audit privilege in this case.

The designers of Windows made a further attempt to secure this setup. In order to obtain the Security event log handle, the caller of the AuthzRegisterSecurityEventSource API must be a process with the executable file path registered for a given event source name at install time. This mechanism prevents, for example, process A.EXE from writing to the security event source that is registered for process B.EXE. This mechanism is not perfect—with some hard work, process names can be spoofed—but it does provide an additional defense in depth safeguard. To reiterate what I said earlier, the one truly strong security measure protecting the Security log from spoofing is the Audit privilege.

Auditing from Managed Code

The .NET Framework includes a library of classes for generating and reading events. These classes reside in the System.Diagnostics namespace and wrap the regular event logging API. Because of the way the event logging classes are structured, they give the false impression that they will work for the Security log, but if you try it, you'll get an access denied exception. In this case, there is nothing you can do to get around the access check except to use an entirely different API.

To facilitate generating audits from managed code, I created a class library that puts an object-oriented spin on the unmanaged APIs. I started by noticing that whenever an audit event is generated, the parameters for that event have a fixed structure—their types and relative order remain constant for each event ID. This reflects the basic structure of the core message and its insertion strings, suggesting the design approach that is illustrated in Figure 4.

Figure 4 Auditing from Managed Code


The application does its auditing through a component called an audit provider. The audit provider marshals parameters that are provided by the application and calls the unmanaged Authz API. In doing so, the audit provider consults the audit policy, which then tells the audit provider which events should be generated. The audit policy is configured by the application.

At the object model level, this design looks like Figure 5. The solution presented here consists of two base classes: AuditProvider and AuditPolicy. Both are abstract base classes, meant to be derived from by the application.

Figure 5 Object Model for Managed Code Auditing

The AuditPolicy base class has a single protected virtual method, IsEventEnabled. This method takes an event ID as its input and returns either true or false. The audit provider uses this information to decide whether to generate or drop an event. This approach is preferable to having the application consult its policy at every point where auditing is desired, and it also allows for the creation of flexible policy management mechanisms.

The AuditProvider class has a simple constructor that takes an audit policy object and a source name. It exposes a single protected method called ReportAudit, which is not exposed to applications directly and is meant to be invoked by derived classes only. AuditProvider wraps system resources that must be destroyed to avoid leaking handles, which is why the class also implements the IDisposable interface.

AuditProvider needs to enable the Audit privilege in order to register a security event source with the LSA. The logic for enabling the Audit privilege uses my Privilege class implementation, which I discussed in the March 2005 issue of MSDN® Magazine (see Manipulate Privileges in Managed Code Reliably, Securely, and Efficiently). This implementation is included with the sample library that accompanies this month's article.

Figure 5 includes two derived classes that are typical of how an application would use the auditing base classes. Normally, an application would instantiate a single ApplicationAuditProvider and use it whenever it needs to generate an audit.

Building the Resource DLL

In order for audit events to be readable in Event Viewer, a special resource DLL must be deployed alongside the application. This DLL is generated as part of the build process that compiles and links the application, but it is usually a separate module. Unlike resource DLLs used for normal event logging, audit resource DLLs are not localized.

You start building the resource DLL with an instance of a message file that has an .mc extension. This file has a well-defined format that includes a header and a list of messages, together with their event IDs. Building the resource DLL from the message file is a multi-step process.

  1. Compile the message file with Microsoft Message Compiler (mc.exe), the output of which is a header (.h) file, a binary (.bin) file, and a human-readable resource (.rc) file.
  2. Discard the header file, and pass the .rc and .bin files through the Resource Compiler (rc.exe) to produce a binary resource (.res) file.
  3. Use link.exe to link the .res file and generate the resource DLL.
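Assuming a message file named audit.mc, the three steps might look like the following on the command line. The exact switches depend on your toolchain version and target architecture:

```shell
rem Step 1: compile the message file (produces audit.h, audit.rc, and a .bin file)
mc.exe audit.mc

rem Step 2: compile the resource script into a binary resource (.res) file
rc.exe audit.rc

rem Step 3: link a resource-only DLL (no entry point, no code)
link.exe /dll /noentry /machine:x86 /out:audit.dll audit.res
```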

Later in this article, when I build a sample auditing application, I will revisit this process with a concrete example. During application installation, the event source is associated with the resource DLL created through this process. The installation process is described in the following section.

Lifecycle Management

Auditing functionality requires additional steps during application installation. In the default Windows configuration, these steps can only be performed by the system administrator. However, these steps do not require the caller to possess the Audit privilege—that privilege is only required to generate actual audit events.

The library I developed provides an additional set of classes that facilitate the application's lifecycle management from the auditing perspective. The SecurityEventSourceInstaller class is used when installing or uninstalling the app. SecurityEventSourceNameValidator is a helper class that simply tells you whether a particular source name is valid—it encapsulates all naming rules that apply to security event source names.

To install a security event source, the following pieces of information are required:

  • Name of the security event source.
  • Full path to a resource DLL. (The installer must make sure that this resource DLL is placed in a location where it cannot be tampered with.)
  • Flag indicating whether multiple instances of the process can log under this source name simultaneously.
  • Full path to the executable file that is authorized to generate messages under this source name (optional).

Things are much simpler at uninstall time: all you need is the name of the security event source.

Building an Auditing Application

Now that all the building blocks have been described, let's go through the steps required to turn the auditing class library into a functioning piece of code inside your server application. The sample application is a bare-bones implementation of this functionality presented in a tidy graphical interface (see Figure 6).

Figure 6 Sample Auditing Application


This sample application is going to log several types of events:

  • Application Initialization and Termination
  • Authentication Success and Failure
  • Object Access Success and Failure

Auditing policy can be turned on or off for most types of audits. The application can also be easily extended to audit changes to the auditing policy itself. (In the real world, it helps to know when these kinds of audit changes take place.)

When a particular type of audit is enabled, you can generate that audit by entering values into the entry fields and clicking a button. The audit-generating functionality invokes the audit provider, which in turn will reference the audit policy when making a decision on whether to generate an audit.

I started by creating a message file and compiling it into the resource DLL. At this stage, as the application designer you get to decide what messages go into the Security log. Usually, the types of events you want to audit are application initialization, application shutdown, changes to audit policy, authentication successes and failures, and authorization successes and failures. You also need to decide what insertion strings to use with various messages. For instance, you should identify users by their unique identifiers (such as SIDs), not human-readable display names (since those are prone to conflicts and are not rename-safe). A sample audit message file is shown in Figure 7. With this file at hand, I have symbolic names for various messages and their message IDs. I can also reference this message file to figure out the types and number of parameters for various events.

Figure 7 Audit Message File

MessageIdTypedef=ULONG
SeverityNames=(None=0x0)
FacilityNames=(None=0x0)

MessageId=0x1
SymbolicName=ApplicationInitialization
Language=English
An application initialized successfully.%n
Instance ID:%t%1%n
.

MessageId=0x2
SymbolicName=ApplicationTermination
Language=English
An application terminated successfully.%n
Instance ID:%t%1%n
.

MessageId=0x3
SymbolicName=AuthenticationSuccess
Language=English
A user has been successfully authenticated.%n
User ID:%t%1%n
.

MessageId=0x4
SymbolicName=AuthenticationFailure
Language=English
A user has failed to authenticate.%n
User ID:%t%1%n
.

MessageId=0x5
SymbolicName=AuthorizationSuccess
Language=English
A user has been granted access to an object.%n
User ID:%t%1%n
Object Name:%t%2%n
.

MessageId=0x6
SymbolicName=AuthorizationFailure
Language=English
A user has been denied access to an object.%n
User ID:%t%1%n
Object Name:%t%2%n
.

Next, I created the application-specific audit provider and audit policy classes. The audit policy is simple: for each event ID, it contains a Boolean mapping of that event ID to a true or false value. In the audit policy implementation you can also hard-code certain restrictions. For instance, you may want to make it impossible to turn off certain types of audits, or to read the values of audit policy from a source such as the registry.

In the sample application, the class SamplePolicy subclasses AuditPolicy from the base class library. It provides public properties AuthenticationSuccessEnabled, AuthenticationFailureEnabled, and so on. In my sample implementation, application start and stop audits cannot be disabled, so there are no properties to alter their state. The state of the other four audit types can be controlled.

The sample application uses checkboxes in the main window to control the auditing policy. In practice, you can tie these to a management interface, registry notifications, or any other source. A snippet of the sample implementation is shown in Figure 8.

Figure 8 Sample AuditPolicy Implementation

public class SamplePolicy : AuditPolicy
{
    private bool[] state = new bool[6];

    public SamplePolicy()
    {
        // Initialization and termination audits are always enabled
        state[0] = true;
        state[1] = true;
    }

    public override bool IsEventEnabled( int eventId )
    {
        if (eventId < 1 || eventId > state.Length) 
            throw new ArgumentOutOfRangeException("eventId");
        lock(state) { return state[eventId-1]; }
    }

    public bool AuthenticationSuccessEnabled
    {
        get { lock(state) { return state[2]; }}
        set { lock(state) { state[2] = value; }}
    }

    // and so on for each type of policy-controlled event
}

As you can see from the message file in Figure 7, Authentication Success is event number 3. In a zero-indexed array, it is at position 2, which is the Boolean value manipulated by the AuthenticationSuccessEnabled property.

With policy out of the way, the next class to implement is a sample audit provider. The constructor is trivial—it takes a policy instance and proceeds to call the base class constructor passing three parameters:

public SampleProvider( SamplePolicy policy )
    : base( policy, "SampleApplication", LogLocation.SecurityLog )
{
}

The first parameter to the base class constructor is the policy object. The second is the name of the event source, in this case SampleApplication. The last parameter is a nice feature of my class library that allows you to generate log events in either the Security or Application log, whichever your platform allows. The sample application will attempt to write to the Security log, but you can easily change that to LogLocation.ApplicationLog and tap into the Application log instead—all without changing a line of code elsewhere in your app!

Proceeding further with the sample audit provider, I need to define one method for each type of generated audit. The following code illustrates this process:

public void AuditApplicationInitialization( Guid instanceId )
{
    ReportAudit( 1, true, instanceId );
}

public void AuditAuthorizationFailure(string userName, 
                                      string objectName )
{
    ReportAudit( 6, false, userName, objectName );
}

The AuditApplicationInitialization method in this code snippet logs an audit event for application initialization. The parameter is a GUID, one of the allowed types of the base class library. Other allowed types include string, uint, DateTime, and UInt64, but you could also easily add support for IntPtr and (in the .NET Framework 2.0) SecurityIdentifier. The method calls the base class ReportAudit method, passing three parameters:

  • The message ID (1), which is referenced from the .mc file and indicates an application initialization message.
  • A Boolean (true) indicating that this is an AuditSuccess event. False results in AuditFailure.
  • Additional parameters—any number from 0 to AuditProvider.MaximumNumberOfParameters (32) is allowed. The only restriction is that they are of the allowed types.

With this information, you can easily decode what AuditAuthorizationFailure does.

The rest is just a matter of calling the correct method from the correct place in your code. Because of the way the interaction between audit provider and audit policy is structured, you never have to worry about whether a particular type of audit is enabled when you invoke provider methods.
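To make that interaction concrete, here is a self-contained sketch of the gating behavior. DemoPolicy and DemoProvider are stand-ins invented for illustration (not the sample library's classes); where this sketch records the event ID, the real provider would marshal the parameters and call the Authz API:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the article's AuditPolicy subclass: events 1 and 2 enabled.
class DemoPolicy
{
    readonly HashSet<int> enabled = new HashSet<int> { 1, 2 };
    public bool IsEventEnabled(int eventId) { return enabled.Contains(eventId); }
}

// Stand-in for the article's AuditProvider: the policy check happens once,
// inside ReportAudit, so callers never check it themselves.
class DemoProvider
{
    readonly DemoPolicy policy;
    public readonly List<int> Reported = new List<int>();
    public DemoProvider(DemoPolicy policy) { this.policy = policy; }

    public void ReportAudit(int eventId, bool success, params object[] args)
    {
        if (!policy.IsEventEnabled(eventId))
            return; // event silently dropped; the caller is none the wiser
        Reported.Add(eventId); // real code would call the Authz API here
    }
}

class Program
{
    static void Main()
    {
        var provider = new DemoProvider(new DemoPolicy());
        provider.ReportAudit(1, true);   // enabled: recorded
        provider.ReportAudit(5, false);  // disabled: dropped
        Console.WriteLine(provider.Reported.Count); // prints 1
    }
}
```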

The sample solution also includes a console application called installer.exe. This application invokes the SecurityEventSourceInstaller class to set up the sample application on your machine. You must be an administrator to invoke this code, and it assumes that the sample solution was placed into the C:\Auditing directory on your hard drive. Simply run installer.exe to install the sample event source, and use the uninstall option to remove it.

Mark Novak is a Security Development Lead at Microsoft. He joined Microsoft in 1995 and has spent the last several years focusing on different aspects of software security. Mark can be reached at markpu@microsoft.com.