Best practices for an audit installation

The previous section listed a number of factors that can affect how an audit and monitoring service system is deployed. Below is a set of best practices derived from those factors. Follow these practices when planning any audit and monitoring service deployment, large or small. You can also refer to the last section of this whitepaper, which discusses how to tune settings in an existing environment to improve performance.

Plan based on concurrently audited users

When planning, always focus on the number of concurrently audited users, not just the total number of audited systems. Take into account user sessions that may be generated by automated monitoring activity from System Monitoring and Management software, such as BMC Patrol.
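A back-of-envelope calculation can make this concrete. The sketch below is purely illustrative: the function name, the concurrency ratio, and the per-system automated-session figure are all assumptions for the example, not product defaults or Centrify guidance.

```python
# Hypothetical estimate of peak concurrently audited sessions.
# All input numbers below are illustrative assumptions.

def peak_concurrent_sessions(interactive_users: int,
                             concurrency_ratio: float,
                             monitored_systems: int,
                             automated_sessions_per_system: float) -> int:
    """Interactive peak plus sessions opened by monitoring software."""
    interactive = interactive_users * concurrency_ratio
    automated = monitored_systems * automated_sessions_per_system
    return round(interactive + automated)

# Example: 2,000 audited users of whom ~10% are active at peak, plus
# 500 systems each averaging 0.5 open sessions from monitoring jobs.
print(peak_concurrent_sessions(2000, 0.10, 500, 0.5))  # 450
```

Note how the automated-monitoring term can rival the interactive term; sizing only for human users would understate the load.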

Avoid single box deployment

Always avoid installing key components such as SQL Server, the collectors, and the Audit Management Server on the same system, especially in environments with heavy workloads. Keep in mind that a collector’s workload is CPU intensive, while SQL Server’s workload is CPU, IO, and memory intensive. If a collector and SQL Server are installed on the same system, they will slow each other down.

Control the amount of data

It’s always a good practice to establish rules that avoid capturing unnecessary data. This typically includes blacklisting commands such as top or tail (which generate large outputs and seldom contain meaningful user activity) or enabling per-command auditing instead of session auditing. Also, compile a list of users who do not need to be audited and add them to the non-audited users list. This often includes accounts that run automated jobs for System Monitoring and Management software, such as BMC Patrol.
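The filtering logic described above can be sketched as follows. This is a conceptual illustration only; the function, account names, and command lists are assumptions for the example and do not reflect how Centrify stores or evaluates these settings.

```python
# Conceptual sketch of capture filtering: drop sessions from non-audited
# service accounts and skip blacklisted low-value commands.
# The lists and names here are illustrative assumptions.

BLACKLISTED_COMMANDS = {"top", "tail"}    # noisy, low-value output
NON_AUDITED_USERS = {"patrol", "nagios"}  # automated monitoring accounts

def should_capture(user: str, command: str) -> bool:
    """Return True only for activity worth auditing."""
    if user in NON_AUDITED_USERS:
        return False
    return command.split()[0] not in BLACKLISTED_COMMANDS

print(should_capture("alice", "vi /etc/hosts"))              # True
print(should_capture("alice", "tail -f /var/log/messages"))  # False
print(should_capture("patrol", "df -h"))                     # False
```

Even a short blacklist like this can eliminate a large share of captured bytes, since commands such as tail -f stream output indefinitely.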

Scope the audit stores efficiently

Always visualize the flow of traffic, not just when audited activity is being captured but also when it is being searched and replayed. It’s better to avoid traffic over slow links by splitting the audited systems into multiple audit stores based on their geographic location, even if that means deploying more collectors and SQL Servers. In certain cases, splitting audited systems into multiple audit stores may not be sufficient, and you may need to consider provisioning multiple audit and monitoring service installations. When audited data is queried, all calls are routed to the audit store databases by way of the Management database. If the Management database is not connected to the console and to the audit store databases over a fast network link, queries will return results slowly no matter how well SQL Server performs.

Estimate storage requirement based on pilot data

No two customers are the same, and you can never accurately predict how much data will be collected over a period of time in a given environment. Hence, it’s important to analyze existing data in a customer’s environment (from a pilot project) to predict future data growth. A pilot test is an effective way to understand factors such as the following:

  • Workload patterns – Understand usage patterns and develop an overall configuration strategy that determines how the audit stores will be scoped, which users should or should not be audited, which commands should be blacklisted, and so forth.
  • Database storage requirements – Roughly how much data will be collected over the retention policy period? This also helps you establish the active audit store database rotation policy.
  • Hardware sizing – What kind of hardware will the SQL Server need to serve the production workload?
  • Collector count – How many collectors will be needed in each audit store? (This number is especially important when auditing Windows systems.)

The Centrify Audit & Monitoring Service Data Analysis tool (see KB-4496) can be very helpful for understanding data trends. If the tool reveals that a larger-than-anticipated amount of data is being captured, you can always use database rotation to keep the active audit store database’s size under control, thus controlling the storage requirements for all attached databases.
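A simple extrapolation from pilot measurements can turn these observations into a storage estimate. The sketch below is a rough model under stated assumptions; the pilot figures and growth factor are invented for illustration.

```python
# Rough storage projection from pilot measurements. The per-day growth
# figure is an assumed pilot observation, not a product benchmark.

def projected_storage_gb(pilot_gb: float, pilot_days: int,
                         retention_days: int,
                         growth_factor: float = 1.0) -> float:
    """Extrapolate pilot growth over the retention period.

    growth_factor > 1.0 adds headroom for auditing more users in
    production than in the pilot.
    """
    per_day = pilot_gb / pilot_days
    return round(per_day * retention_days * growth_factor, 1)

# Example: a 30-day pilot captured 60 GB; the retention policy is
# 90 days and production will audit roughly twice as many users.
print(projected_storage_gb(60, 30, 90, growth_factor=2.0))  # 360.0
```

The growth factor matters: scaling the pilot linearly without accounting for the larger production population is the most common sizing mistake.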

Maintain databases periodically

Apart from taking regular backups, it’s also important to keep the databases healthy by maintaining them periodically. This includes activities such as reorganizing or rebuilding indexes; a customer’s DBA must perform these tasks periodically. Centrify recommends reorganizing indexes that are 5% to 30% fragmented and rebuilding indexes that are more than 30% fragmented.
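The reorganize-versus-rebuild rule above can be expressed as a simple decision function. In practice a DBA would read fragmentation from SQL Server’s sys.dm_db_index_physical_stats view and act with ALTER INDEX; the Python sketch below only illustrates the thresholds, and the sample percentages are invented.

```python
# Sketch of the reorganize-vs-rebuild rule from the text:
# reorganize indexes fragmented 5-30%, rebuild above 30%,
# and leave indexes below 5% alone. Sample values are illustrative.

def maintenance_action(fragmentation_pct: float) -> str:
    if fragmentation_pct > 30:
        return "REBUILD"
    if fragmentation_pct >= 5:
        return "REORGANIZE"
    return "NONE"

for pct in (2.0, 12.5, 45.0):
    print(pct, maintenance_action(pct))
# 2.0 NONE
# 12.5 REORGANIZE
# 45.0 REBUILD
```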

Control the size of active databases

A large active audit store database often results in poor performance because of fragmented indexes, lengthy backups, and out-of-date database statistics, especially when the databases are not maintained periodically. Centrify recommends keeping the active audit store database size between 250 GB and 500 GB (as of Suite 2016). Consider rotating databases whenever the size exceeds the recommended thresholds. You can rotate databases programmatically by using either the Centrify DirectManage SDK or the Centrify Audit PowerShell Module, or manually by using the Audit Manager console. It’s also a good practice not to keep too many audit store databases attached to an audit store, because doing so degrades query performance.

Plan database rotation based on retention policy

Always try to align the audit data retention policy with the active audit store database rotation. For example, if the audit data retention policy requires the last 90 days of data to be online, try to rotate the active audit store database every 90 days. This strategy makes it easy to find archived data if it is ever needed for review in the future. One exception is an environment where the audit data retention policy is so long that the active audit store database is guaranteed to exceed the recommended maximum size (as mentioned in the previous section). In such cases, you can divide the entire retention period into smaller periods (for example, one database for each month) and continue to rotate the active audit store database at the recommended intervals. Irrespective of which strategy you implement, it’s always recommended to detach all audit store databases that contain data outside of the retention policy period. This not only improves query performance but also reduces disk usage on the database server.
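Choosing a rotation interval that both divides the retention period evenly and keeps each database under the size cap can be sketched as below. The function and all inputs are illustrative assumptions; only the 500 GB cap comes from the recommendation above.

```python
# Sketch of aligning rotation with retention: pick the longest rotation
# interval that divides the retention period evenly while keeping each
# database under the recommended size cap (500 GB per the text above).
# Input figures are illustrative assumptions.

def rotation_interval_days(retention_days: int,
                           projected_total_gb: float,
                           max_db_gb: float = 500.0) -> int:
    """Smallest number of evenly sized databases that stay under the cap,
    expressed as a rotation interval in days."""
    databases = 1
    while projected_total_gb / databases > max_db_gb:
        databases += 1
    return retention_days // databases

# Example: 360 days of retention projected at 1.8 TB total -> four
# databases, so rotate every 90 days.
print(rotation_interval_days(360, 1800.0))  # 90
```

If the projected total already fits in one database, the function simply returns the full retention period, matching the simple one-database-per-retention-period strategy.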

Configure SQL Server optimally

Centrify recommends setting the SQL Server machine’s power plan settings (Control Panel > Power Options) to High Performance.

SQL Server has a setting called Max Server Memory that controls the maximum amount of physical memory the SQL Server buffer pool can consume. An incorrectly configured Max Server Memory may result either in the SQL engine causing high IO or in the OS and other programs being starved of memory. It’s critical to configure Max Server Memory correctly based on the total physical memory available. Always configure this value as recommended before deployment begins.
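One commonly cited community rule of thumb (an assumption for illustration here, not a Centrify or Microsoft mandate) reserves 1 GB of RAM for the OS per 4 GB of total RAM up to 16 GB, then 1 GB per additional 8 GB, and gives SQL Server the rest:

```python
# Illustrative rule-of-thumb calculation for Max Server Memory.
# The reservation formula is a common community heuristic, not an
# official recommendation; validate against your own workload.

def max_server_memory_gb(total_ram_gb: int) -> int:
    reserved = min(total_ram_gb, 16) // 4          # 1 GB per 4 GB up to 16 GB
    if total_ram_gb > 16:
        reserved += (total_ram_gb - 16) // 8       # then 1 GB per 8 GB
    return total_ram_gb - reserved

print(max_server_memory_gb(32))  # 26
```

On a dedicated 32 GB SQL Server this heuristic would leave about 6 GB for the OS; servers running other services need a larger reservation.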

Centrify recommends storing the transaction logs and data files associated with any SQL Server database on two separate volumes. For more information, see Microsoft Knowledge Base article https://support.microsoft.com/en-us/kb/2033523.

Other recommendations

Centrify recommends deploying at least two collectors per audit store for redundancy.

Understand that any hardware has its limits

It’s entirely possible that even after following all these best practices, the Centrify Audit & Monitoring Service system continues to perform poorly. In such cases, consider splitting the workload by deploying additional SQL Servers or collectors, depending on where the bottleneck is. Deploying an additional SQL Server almost always requires reconfiguring the scope of the audit stores (to redirect some traffic to the new SQL Server), so it must be done with careful planning.