Troubleshooting Process Elevation in Privilege Manager

Oct. 12th 2016

Here are some tips when trying to discover why the process elevation feature is not working as expected.

  • Ensure that the rule has been created, has been saved and applied to a Group Policy Object (GPO). Ensure this GPO has been linked to either an OU or the domain.
  • Ensure that the Privilege Authority Client is installed on the client machine by looking in the Add/Remove Programs applet. If WMI is available, you can query the machine by dropping into a command prompt and typing "wmic /node:<fqdn of machine> product get name,version". If you need PowerShell, there is a great script located here.
  • From the command prompt, run 'GPUpdate /force' to make sure that the Group Policy has been refreshed.
  • Run 'GPResult' (or 'GPResult /R' on Windows 7 or 2008), and check that the GPO the rule belongs to has been applied to that machine. You can also use the Resultant Set of Policy (RSoP) feature or Group Policy Modeling in the Group Policy console. For more info, see here.
  • Check in the registry for the rule. Rules are copied to the key:

HKEY_LOCAL_MACHINE\Software\ScriptLogic Corporation\Privilege Authority\CSE\CSEHost\Host. Under this key you will see a key which is the SID for each user (e.g. S-1-5-21-15….) and then a unique GUID for each rule underneath it. To match the SID to a user account, navigate to HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList and look at the data in the ProfileImagePath value, or use the script provided below.
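Before turning to the script below, the mapping can often be checked straight from a command prompt. This is a sketch: <SID> is a placeholder for the actual key name, and note that wmic can be slow on domain-joined machines because it enumerates domain accounts:

```bat
:: List account names and their SIDs via WMI
wmic useraccount get name,sid

:: Inspect a specific profile's ProfileImagePath (replace <SID> with the key name)
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v ProfileImagePath
```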

You can also retrieve the SID with the following VBScript, which resolves the currently logged-on user:

Set oNetwork = CreateObject("WScript.Network")

User = oNetwork.UserName
UserDomain = oNetwork.UserDomain
strComputer = "."

Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

Set objAccount = objWMIService.Get("Win32_UserAccount.Name='" & User & "',Domain='" & UserDomain & "'")

DisplayString = UserDomain & "\" & User & " = " & objAccount.SID

WScript.Echo DisplayString
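If PowerShell is available on the client, the same lookup can be done with the .NET account classes. A minimal sketch, assuming a domain TARGET and user JSMITH (substitute your own values):

```powershell
# Translate DOMAIN\user to a SID (the account names here are examples)
$acct = New-Object System.Security.Principal.NTAccount("TARGET", "JSMITH")
$acct.Translate([System.Security.Principal.SecurityIdentifier]).Value

# Translate a SID back to an account name (S-1-5-18 is the well-known SYSTEM SID)
$sid = New-Object System.Security.Principal.SecurityIdentifier("S-1-5-18")
$sid.Translate([System.Security.Principal.NTAccount]).Value   # NT AUTHORITY\SYSTEM
```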

  • If the rule is present in the registry, enable logging to troubleshoot further.

To Enable Logging


Under the registry key HKLM\Software\ScriptLogic Corporation\Privilege Authority\, change 'LogLevel' from the default value of 0 to 3 and restart the ScriptLogic Privilege Authority Host Service. The log files can be found in the folder specified in the 'InstallPath' value under this same key. The default log location is C:\ProgramData\Privilege Authority\Logs.
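The same change can be scripted. A sketch using reg.exe and net.exe follows; it assumes LogLevel is a DWORD value and that the service name matches what services.msc displays, so verify both before running:

```bat
:: Raise the Privilege Authority log level to 3 (assumes LogLevel is a DWORD)
reg add "HKLM\Software\ScriptLogic Corporation\Privilege Authority" /v LogLevel /t REG_DWORD /d 3 /f

:: Restart the host service so the new log level takes effect (confirm the name in services.msc)
net stop "ScriptLogic Privilege Authority Host Service"
net start "ScriptLogic Privilege Authority Host Service"
```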

  • Run the application or target process that you have created your rule for. Then go to the log file folder (by default, C:\ProgramData\Privilege Authority\Logs) and open the CSEHostEngine.log file. Every process that is being run by the user will be displayed. To the right of each process, you will see a "MATCH" or "NO MATCH" status indicating whether or not the process matched a given Privilege Authority rule. Then, do a search for the process that you are trying to elevate and see if there is a match or not.
Posted by bc-admin | in Uncategorized | Comments Off on Troubleshooting Process Elevation in Privilege Manager

SharePlex – Repairing a Table with Copy

Jan. 2nd 2014

This procedure is generally used only for very large tables when a standard compare and repair cannot be executed due to time or system resource constraints. The instructions below allow you to execute this procedure with the least impact to the database by minimizing the lock time required on the table.

  • Open 3 windows on the source system to expedite the process. This will enable you to minimize the amount of time the full table lock is held. With proper execution, the table lock can be less than 10 seconds.
  • In one window, create an export parameter file to export the table you want to synchronize. When complete, enter the export command but don't hit the return key. The basic export command or Data Pump can be used.
    • nohup expdp parfile=exp.par &
  • In your second window, enter sqlplus from the command line. Enter the lock table command but do not hit return.
    • sqlplus / as sysdba
    • SQL>  lock table <table name> in exclusive mode;
  • In your third window, enter sp_ctrl and type in the flush command but do not hit return.
    • sp_ctrl
    • > flush <datasource>
  • You are now ready to start the procedure. All steps should be done as quickly as possible to reduce the lock time.
  1. Execute the lock table command in window two. If this times out, retry until successful.
  2. Execute flush command in window three.
  3. Start export command in window one.
  4. Return to window two and execute a commit.
  5. When export is completed, transfer dump file to target server, truncate the table, and import it.
  6. Start the post process.
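Put together, the three windows look roughly like this. The parameter file name and table name are placeholders, and the exact export options will depend on your environment:

```
# Window 1 - export (prepared in advance, started at step 3)
nohup expdp parfile=exp.par &

# Window 2 - lock, then commit (steps 1 and 4)
sqlplus / as sysdba
SQL> lock table <table name> in exclusive mode;
SQL> commit;

# Window 3 - flush (step 2)
sp_ctrl
sp_ctrl> flush <datasource>
```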


Author: Mark Bochinski, Senior SharePlex DBA, LeadThem Security





Posted by bc-admin | in SharePlex | Comments Off on SharePlex – Repairing a Table with Copy

SharePlex Compare and Repair

Jan. 2nd 2014


The COMPARE and REPAIR commands are essential components of the SharePlex toolset. The COMPARE command, started on the source system, will compare one table with the corresponding table on the target. The COMPARE USING <config file name> command will compare the entire list of tables in the config file. The COMPARE command creates one log file on the source and two files on the target, one log file and one SQL file. The log file records the steps taken and errors if they occur. The SQL file contains comments plus any SQL statements needed to bring the table back in sync. However, these SQL statements are not executed. During the execution of the COMPARE command, a brief exclusive table lock is required on the source system. The table is immediately unlocked once SharePlex starts reading it. On the target system, however, the exclusive table lock is held for the duration of the compare on that table. This prevents the table from being modified during the compare. The REPAIR command works identically to the COMPARE command with the exception that it does execute the SQL statements and synchronizes the OOS (out-of-sync) table.

Before starting the COMPARE or REPAIR commands, the TEMP tablespace and the UNDO tablespace may need to be made larger. Also, the undo_retention database parameter may need to be increased. At a bare minimum, the TEMP tablespace will need to be at least as large as the largest table. With the default setting of SP_DEQ_THREADS (2), the TEMP tablespace would need to be larger than the sum of bytes of the two largest tables. If SP_DEQ_THREADS is set to a larger number, increase the size of the TEMP tablespace accordingly. Similarly, the UNDO tablespace may need to be increased. Based on transaction volume and the length of time it takes to compare the largest table, increase the size of the UNDO tablespace and increase the undo_retention database parameter to avoid an ORA-01555 "snapshot too old" error. Tables with LOBs take much longer to compare or repair than tables without them.

The SharePlex COMPARE and REPAIR commands work as follows. After the table is locked, it is read and sorted in identical fashion on both source and target. If the table is large, it will probably need to be sorted in the TEMP tablespace; as this writes to disk, it will take longer than if it was sorted in RAM. Next, 10000 rows are read on the source and target systems and a UNIX checksum is performed on each batch. If the checksums match, the next 10000 rows are read, and so on. If the checksums do not match, the COMPARE and REPAIR processes determine which rows are out of sync and create the SQL statements to repair them. The REPAIR process then executes those SQL statements.

Commonly modified COMPARE/REPAIR parameters

SP_DEQ_BATCHSIZE – Default 10000. This parameter determines how many rows are read on source and target before executing the UNIX checksum command. Larger batch sizes increase the processing speed but require more RAM. The range of values is from 1 to 32767.

SP_DEQ_THREADS – Default 2. This parameter controls the number of parallel compare or repair processes. It only impacts the COMPARE USING <config file name> command. A common occurrence when this parameter is set to a high value is multiple large tables comparing at once. If the database has 1000 tables in replication and 20 of them are large, SharePlex will quickly compare the small tables while the large tables will take longer as they sort to the TEMP tablespace. Eventually, many large tables could be comparing at the same time. This can cause a huge load on the OS. Setting SP_DEQ_THREADS larger than the number of available CPUs is inadvisable.

SP_DEQ_SKIP_LOB – Default 1. The default value causes LOBs to be included in the compare/repair process. Setting it to 0 will cause only the non-LOB columns to be included. This greatly speeds up comparing or repairing LOB tables, which is especially useful if the LOB columns are never modified after insert.
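These parameters are set from sp_ctrl before starting the compare or repair. The session below is a sketch: the config file name is a placeholder, and the command syntax should be confirmed against the documentation for your SharePlex version:

```
sp_ctrl> set param SP_DEQ_THREADS 4
sp_ctrl> set param SP_DEQ_SKIP_LOB 0
sp_ctrl> compare using my_config
sp_ctrl> repair using my_config
```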


Author: Mark Bochinski, LeadThem Security Senior SharePlex DBA





Posted by bc-admin | in SharePlex | Comments Off on SharePlex Compare and Repair

Reusing Functions in ARS Scripting

Nov. 20th 2013

Sometimes in scripting, I tend to reuse the same bit of code over and over again, which can cause the script being written to become larger than it really has to be. This is where PowerShell functions come into play.


A function is nothing more than a reusable bit of code that is referenced by a name and possibly some variables that are passed into the function.


An example of this is:


function HelloWorld {
    Write-Host "Hello to the World!"
}




And in the script, it would be called by simply referencing the name:

HelloWorld
The function may be included in the body of a script and called directly, and all works as expected. But what can we do if the function is used across multiple scripts, without re-inventing the wheel each time? Well, we can place the function(s) in a PowerShell script and just include (reference) that function script within the new script. In PowerShell this is called dot sourcing and is formatted like so:


. c:\scripts\include.ps1


Note:  The format is a period, space, and then the path to the script to include.  When dot-sourcing a script, all variables and functions are available to the console after the script ends.
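For example, assuming a library file at c:\scripts\include.ps1 (a hypothetical path) that contains the HelloWorld function, a consuming script only needs the dot-source line before calling it:

```powershell
# c:\scripts\include.ps1 - the shared library
function HelloWorld {
    Write-Host "Hello to the World!"
}

# caller.ps1 - dot-source the library, then call its function by name
. c:\scripts\include.ps1
HelloWorld
```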


Now that we have the basics out of the way, the same idea can be used in Quest Active Roles Server (ARS). Script modules within ARS can be created for a multitude of operations:


  • onPreCreate
  • onPostCreate
  • onPreDelete
  • onPostDelete
  • onPreModify
  • onPostModify
  • onPreMove
  • onPostMove
  • onPreRename
  • onPostRename
  • onPreGet
  • onPostGet
  • onPreDeprovision
  • onPostDeprovision
  • onPreUnDeprovision
  • onPostUnDeprovision


Let's just say that, for all of the operations above, you had a need to write the date and time to the description field if the object was of type User. This could be a requirement that makes it easy to tell when the user object was last touched by the automation of ARS.


The scenario above would be a good use of a function; by including this function with all ARS Policy scripts, it could be called from any of them.


To accomplish this, first, we need to create a Library script within ARS.  A Library script is the only type of script in ARS that may be referenced from a policy script.  In the ARS console, expand Configuration, then Script Modules.  From here the Library script can be created, or a new Script Container can be made to house and organize the custom modules.



I have a custom scripts container where I place all of my created scripts (Library and Policy).  Right click where you want to create a script, and select New->Script Module.  This will open the Wizard to create a new Script Module object.



Enter a name for the new library, select your language of choice (this has been written around using PowerShell), enter a Description if so desired, and click Next.



This page is where we select Library script.  As you can see by the explanation, this is exactly what we are looking for.  Click Next.



The last page is just a confirmation, click Finish to create the new Library.

In order to add functions to the new Library, locate it within the tree, click on it (you will see the right pane is blank), and either click the edit icon or press F4 to edit the script.

Now, we create our function.  I will just show the finished code below:

Function SetUserDescription {

    If ($Request.Class -eq "User") {

        $dn = $DirObj.Get("distinguishedName")

        $Date = Get-Date -Format "yyyyMMdd-HHmmss"

        $Description = "User Modified by ARS on " + $Date

        Set-QADUser $dn -Description $Description
    }
}

The code above checks the class, grabs the user's DN, grabs the date, builds the description text, and then sets the user's description.

Save the Library script by clicking the save icon or by pressing Alt-S.

We have a script Library with our function saved in it, now to use it within a policy script.

A new Policy script can be created using the previous steps for the Library script, just select Policy Script in the second step.

Follow the Library procedures for editing the script.

Now that we are in editing mode, let's include our Library. Click the include icon or press Alt-L.



This dialog appears, and all that is required is to expand the container that the Library exists in and check the box next to the Library and click OK.  Note:  Multiple Libraries may be included at the same time with this dialog.

After the OK button is clicked, code will appear at the top of your edit window that looks like the following:

function onInit($Context)
{
    $Context.UseLibraryScript("Script Modules/Custom Scripts/TestLibrary")
}


With this included now, you may reference any function that exists within that Library.  In our example, you would just reference SetUserDescription in order to call that function.

function OnPreMove($Request)
{
    # *** Do some stuff ***

    SetUserDescription
}
The above will execute the SetUserDescription function from the included Library script before a User is moved but after ‘stuff’ script commands execute.

Once your Library is built up and assigned to Policy Scripts, you are ready to assign these Policy scripts to Provisioning Policies, Deprovisioning Policies, or Workflows within ARS.


Author: Russ Burden, Technical Architect, LeadThem Security





Posted by bc-admin | in ARS | Comments Off on Reusing Functions in ARS Scripting

Enabling Active Directory User Authentication in TPAM

Nov. 14th 2013


The TPAM appliance can utilize External Authentication Sources to permit user access to the TPAM interface for management of the appliance, as an auditor, or just as a normal user wanting to request a password or session from the system.  This article will be focused on allowing users within TPAM to access the appliance with their Active Directory credentials.

First we need to gather some information about the environment and the user(s) that need to authenticate using AD credentials.

For my test lab, below are the requirements:

Domain Name(FQDN): target.local

User Information:

First Name: John

Last Name: Smith

LoginID (sAMAccountName): JSMITH


1)      Log into your appliance's /admin interface with an administrator user (default is parmaster)

2)      Move the mouse to System Status/Setting, move down to External Authentication, then click on WinAD. (The TPAM interface does not require clicking on each option, these are mouse-over activated menus, and only the final option requires a mouse click)


3)      Within the WinAD Window, Click New System

4)      Enter the System Name; this is a TPAM-only reference name and can be anything you want.

5)      Enter the Server Address; this can be an IP, a DC FQDN, or preferably, if DNS is working properly, the FQDN of the AD domain name.

6)      (Optional) Change the Timeout, the default is 4 hours, maximum is 8 hours.

7)      Click Save Changes and your newly configured System Name appears in the Configured Systems.


At this point, TPAM has a configured external system to authenticate against, but the user account within TPAM must be configured to utilize this External Authentication source.  For this example, I will add a new user to TPAM and configure the user to login with its AD credentials.

8)      Log into your appliance's /tpam interface with an administrator user (default is paradmin)

9)      Move the mouse to Users & Groups, move to UserIDs, and click on Add UserID.

(The TPAM interface does not require clicking on each option, these are mouse-over activated menus, and only the final option requires a mouse click)







10)   Once in the New UserID window, you will notice a multitude of fields available. Note the fields with a red asterisk (*). These fields are mandatory when creating a user within TPAM.

11)   Fill in the User Name field; this is the TPAM User Name and does not necessarily need to be the same as the AD login ID, but to cut down on confusion, it is recommended to make them the same.

12)   Fill in the user's first and last name in the fields provided.

13)   Fill in any other fields desired; they are not required.

14)   Select the User Type. The default is Basic, and this is what the majority of TPAM users should be configured as. This selector is where you can define a user as an Auditor, Administrator, etc.

15)   At the bottom of the page is where we define how this user will authenticate to TPAM. The default is for any new user to authenticate locally to TPAM, and that is what the password fields are for. If you enter and confirm the password in these boxes and save the user, this user will authenticate locally to the TPAM appliance.

16)   The User Authentication section right below the Password is where you define this user to utilize the External Authentication we created earlier.

17)   On the Primary, click on the drop-down next to Local and select WinAD. When this is done, notice that the Select a System drop-down enables and the password and confirm boxes go away.

18)   Click the drop-down for Select a System, and select the External Authentication that was defined earlier.

19)   In the UserID box, enter the user's AD sAMAccountName or UPN (if desired).

20)   Your new user window should look similar to the image below. Click Save Changes.



Now you have an External Authentication source and a new user configured to utilize that source. All that remains is to test it.

1)      Log Out of TPAM and close all browser windows

2)      Launch your browser and open the /tpam interface and Login with your new user.

3)      If everything was configured properly and Active Directory is reachable, then your new user will receive the TPAM page below: (Note the user ID in the upper right corner)


Notice that this window is pretty sparse; since this was a new user created only for demoing External Authentication, it has no access to anything else in the system, but that is for another day.


Author: Russ Burden, Technical Architect, LeadThem Security







Posted by bc-admin | in TPAM | Comments Off on Enabling Active Directory User Authentication in TPAM