
Log Parser Studio – Write your first query in less than 30 seconds

Though LPS ships with over 170 preconfigured queries out of the box, there almost always comes a time when you have a question you'd like answered that an existing query doesn't quite cover. There are a couple of things we can do: modify an existing query that is close to what we want, or write our own. I often hear the following when speaking about LPS with others:

I love using LPS, but I don't know the first thing about writing SQL queries. SQL queries are complicated and difficult to write.

It's true that SQL queries can get complicated, but Log Parser query syntax is only a subset of the SQL you are used to hearing about, so the good news is that you don't need to be a SQL DBA, or anything near it, to begin writing simple queries that help you get answers to your questions. So, how can you write your first query in less than 30 seconds? Just click New Query in LPS, select a log type and click Run! OK, so LPS wrote it for you, but the default new query is there both as a starting point and as the simplest example of a query, one that pulls the first ten records found in the logs:
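The screenshot of the default query from the original post isn't reproduced here; as a sketch (the exact stock query may vary slightly between LPS versions), it is essentially:

```sql
SELECT TOP 10 * FROM '[LOGFILEPATH]'
```

In plain English: return the first ten records, all columns, from whatever log files were chosen in the Log File Manager.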


Let's dissect the query above to better understand what is going on. Once you have this basic understanding you'll soon see that it really isn't that hard to get started. I will be explaining these in layman's "let's get some work done" terms, as they apply to LPS only. This isn't a formal SQL course. :)


SELECT

We can always think of the SELECT statement as "which columns from the log do I want to include in the results?" SELECT will always appear as the very first statement in every query, followed by the list of columns you wish to include. Let's assume a theoretical log file with the following columns: date, time, user, status, duration, and show some examples. Actually, let's create our own log file and use that in this tutorial. Begin by pasting the following text into Notepad, then save it as "TestLog.CSV":

date,time,user,status,duration
6/4/2013,12:00:04,User1,OK,123
6/4/2013,12:00:05,User3,OK,516
6/4/2013,12:05:31,User3,ERROR,731
6/4/2013,12:10:50,User1,OK,826
6/4/2013,12:10:52,User1,PENDING,154
6/4/2013,12:15:12,User2,ERROR,213
6/4/2013,1:23:32,User2,OK,22
6/4/2013,1:24:00,User1,ERROR,546
6/4/2013,1:39:21,User2,OK,987
6/4/2013,1:40:01,User3,PENDING,265

Once saved, open the Log File Manager in LPS, choose "Add File", then browse to and select TestLog.CSV. Create a new query in LPS and change the log file type to CSVLOG. Now that you are set up and ready, here are some examples you can run against that log, along with explanations of how they work:

SELECT date, time, user FROM '[LOGFILEPATH]'

Display every record but only include the date, time and user columns in the result.


SELECT user, status FROM '[LOGFILEPATH]'

Display every record but only include the user and status columns.


SELECT * FROM '[LOGFILEPATH]'

Display every record including all columns (the asterisk is a wildcard for column names). You probably never want to use this exact query, as it returns every single record, which is the same as just opening the file.


SELECT TOP 5 user, status FROM '[LOGFILEPATH]'

Display only the first 5 records and only include the user and status columns. Notice that field names are always separated by commas, with no comma between the last field and the FROM statement. As an FYI, it is a common mistake to place an extra comma after the last field; when this happens the query will throw an error similar to "Expecting FROM statement". Another good thing to know is that queries are evaluated for validity left-to-right, top-to-bottom, so the seemingly cryptic error you encounter when running a query containing a mistake will often give a clue as to where the issue is.


TOP

TOP is used to restrict the number of records returned and always follows the SELECT statement. It doesn't need much explanation, but I'm including it because the first thing you'll likely do when writing queries is write one that returns too many records, and the whole idea of queries is to isolate what you need from the noise. TOP is a great way to protect your queries from information overload by restricting the total records returned, but it is purely optional, where SELECT is not.
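For example, a quick sketch against our TestLog.CSV; this should return only the first three records even though the log contains ten:

```sql
SELECT TOP 3 date, time, user FROM '[LOGFILEPATH]'
```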


FROM

FROM means just what it says: "from what log file or resource will I be querying?" In other words, the path to the log file(s). In LPS we typically use the placeholder token '[LOGFILEPATH]' instead of real filenames; it is swapped out with the actual file names chosen in the Log File Manager when the query runs. As you can see, we really only need SELECT, some field names and FROM to write the simplest queries from scratch. It really is that simple, but wait, there's more. Just pulling records from a log as-is isn't much different from opening the file and looking at it; this is where the WHERE statement comes in.


WHERE

WHERE is our workhorse for extracting only what we care about, and is used to search our logs for specific data via equality matching. Continuing with our log file above, what if we wanted to return only records for User1? The following returns all records where the user field is User1:

WHERE user LIKE 'User1'

Notice the single quotes, which are the default for string searches. Also notice the LIKE statement, which gets a lot of use alongside WHERE. LIKE can also use wildcards, which is preferred in many cases; the wildcard in LPS is the % sign. Here are some additional examples using WHERE and LIKE:

WHERE user LIKE 'User1%'

Find all records and display all columns for every record where the user field begins with User1. Notice the % wildcard is at the end of the string.

WHERE user LIKE '%User1%'

Find all records and display all columns for every record where User1 appears anywhere in the user field. The wildcard is at both the beginning and the end of the string.

SELECT user, time, status FROM '[LOGFILEPATH]'
WHERE user LIKE '%User1%'
AND status LIKE '%Error%'

Find all records where User1 appears anywhere in the user field AND where the status field contains the word error; display only the user, time and status fields. I threw in the AND statement to show that you can expand your criteria and narrow results easily. What if we wanted to look for errors for User1 OR User2?

SELECT user, time, status FROM '[LOGFILEPATH]'
WHERE (user LIKE '%User1%' OR user LIKE '%User2%')
AND status LIKE '%Error%'

If you remember your math classes from high school, you might remember the term "operator precedence"; it is being employed above because SQL queries observe similar logic. In other words, what is contained inside the parentheses is evaluated first; otherwise our query might return unexpected results when evaluating the OR between users and the AND for finding errors. When using AND and OR together, think this through so that your queries do what you expect them to do.
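To illustrate, here is a sketch of the same query with the parentheses removed. Since AND is evaluated before OR, this reads as "User1 anywhere, OR (User2 AND error)", which returns every User1 record regardless of status, almost certainly not what was intended:

```sql
SELECT user, time, status FROM '[LOGFILEPATH]'
WHERE user LIKE '%User1%' OR user LIKE '%User2%' AND status LIKE '%Error%'
```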


ORDER BY

The ORDER BY clause is fairly self-explanatory. It allows you to sort the records by the fields you choose, in either ascending or descending order. Here are a few examples with ORDER BY, expanding on our progress:

SELECT user, time, status FROM '[LOGFILEPATH]'
WHERE (user LIKE '%User1%' OR user LIKE '%User2%')
AND status LIKE '%Error%'
ORDER BY user ASC

The above finds all records where User1 or User2 is in the user field and the status field contains the word error, then sorts the results by user name in ascending order. ASC is not actually required; omitting it defaults to ascending order. Our two choices are ASC (ascending) and DESC (descending). If you want descending order you do need to include DESC, since the default is ASC. I typically include ASC anyway because it makes the query more self-explanatory for others. You can also sort on multiple fields using commas:

SELECT user, time, status FROM '[LOGFILEPATH]'
WHERE (user LIKE '%User1%' OR user LIKE '%User2%')
AND status LIKE '%Error%'
ORDER BY time, user ASC
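Since every example so far sorts ascending, here is a quick descending sketch as well, listing the longest durations in our test log first:

```sql
SELECT user, status, duration FROM '[LOGFILEPATH]'
ORDER BY duration DESC
```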


AS

AS can be used to create aliases for field names. Field/column names in logs can sometimes be less than desirable for the story we want to tell. Maybe it's an IIS log and we would prefer "cs(user-agent)" to display as "Client Device" instead, because it makes more sense for the query/report we are going to generate. AS is extremely simple to use: place "AS Alias" directly after the field you wish to rename. Below we are using our same example log and renaming the fields to our liking:

SELECT user AS [User Name], time AS TimeLogged, status AS Result
FROM '[LOGFILEPATH]'
WHERE (user LIKE '%User1%' OR user LIKE '%User2%')
AND Result LIKE '%Error%'
ORDER BY time, user ASC

I exploited a few features above, so let me explain. The brackets allow us to use spaces in an alias, so the user column will display as "User Name". Once aliases are set they can also be used in the query itself, so long as they are referenced after the alias is created; notice how I used "Result" with the AND operator instead of the real field name, "status". I use aliases all the time, as they are wonderful for converting cryptic-looking field names into something stakeholders better understand. Queries are about telling a story, and aliases are a great tool for presenting results and reports that make sense to those you are delivering them to.

Field Names

When learning to write queries, the first hurdle you may encounter is knowing the names of the actual fields. If you have a 3GB log file, opening it up just to find field names isn't going to be very productive. For IIS logs it's easy in LPS: just press F3 and a window will pop up that lists all default IIS fields. Select the ones you care about and hit Enter; they will be copied to the clipboard (commas included) so you can paste them straight into your query. :)

For other logs you can actually write a query that gives you the field names, which you can then use as a reference in your new query. Here's how I do it; there are also some built-in sample queries that already perform this useful trick:
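As a sketch (the built-in sample queries may use a slightly different approach), asking for zero records does the trick:

```sql
SELECT TOP 0 * FROM '[LOGFILEPATH]'
```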


You may be wondering why we would want to run a query that returns zero records; well, we don't want records, we want field names, and the query above does just that, provided you have chosen the correct log file type. Just run the query and you will see the grid populate with column names only and no records. Now right-click anywhere inside the empty grid and choose Copy, which copies the field names to your clipboard so you can paste them and refer to them as you write your query. One thing I often do is save my "find field names" query and include the field names as a comment in the query itself for future use:


/* Field Names: date, time, user, status, duration */

If you aren't code-savvy, /* and */ are the opening and closing tokens for comments. Anything between them is considered a comment and is not evaluated as part of the query. Keep this in mind, because you can document your queries with useful information, which is exactly what we are doing here.


Hopefully this quick crash course will get you up and running with creating your own queries. Though the built-in queries are extremely useful, the real power of LPS is in modifying and writing your own, because every situation is different. Take some time to study the above and see if you can write a few simple queries against logs in your own environment. It doesn't take long to get a grasp on writing basic queries. Stay tuned for part II, which will dig a little deeper into query creation and different methods of getting at the answers you need.

Source: https://blogs.technet.microsoft.com/karywa/2013/06/05/log-parser-studio-write-your-first-query-in-less-than-30-seconds-part-i/

Getting Started with Log Parser Studio – Part 3

This installment of the Getting Started series is a walkthrough of the query window/editor. This is where you will likely spend most of your time, whether you are simply running queries you have chosen from the library or editing, testing and saving your own queries. Let’s start by taking a look at it by double-clicking any query in the library:

Tab Name
The generic name of the tab the query is contained within. To save space (query names would be much too long) these are named sequentially as Q1, Q2, Q3 and so on. You can rename these as needed by right-clicking any tab and choosing “Rename tab”.

Show/Hide Query
Shows or hides the bottom query window. This is handy when you are not concerned with making any changes to the query and just want to work with the results grid. The results grid is the top half of the window.

Lock/Unlock Query
Queries loaded from the library are read-only by default. This is a safety net to prevent accidental changes to the query. However, many times you may want to tweak the query and this orange button unlocks it.

Elapsed Time
Once a query is executing, this timer displays how long it has been running; after it completes, it shows the total time the query took.

Log Type
Log Parser Studio can query multiple log file types such as IIS, CSV, TXT and so on. These are saved with each query but can be changed/selected here. Queries almost always need to match the logs that are being queried.

Input Params
For each Log Type there are multiple parameters that can be set. Typically you won’t need to change any of these but if you are an advanced Log Parser 2.2 user you’ll be able to make granular changes to how the logs are queried. Warning: Changing these params without knowing exactly what you are doing can cause perfectly good queries to fail and display errors. If for some reason this happens you can easily reset the parameters to their defaults by clicking the reset button as shown below. Additionally, the available params are different for each type of input format (log type). There is also an option in Options > Preferences entitled “Reset all log formats” which will reset all format parameters (both input and output) to their LP 2.2 defaults:

Query Name
The full name of the query. If it is a new query it will be "New Query". If the query has been modified it will contain an asterisk in the name: *my modified query

Query Window
The actual query itself. Queries follow basic SQL format, which will be explained in subsequent posts.

The Results Grid

There is one result grid per query tab. This is where the query results are returned when the query returns the results to LPS instead of a CSV file. The columns are all dynamic and depend on both the query and the fieldnames in the log you are querying. Once the results are returned you can perform several actions:

  1. Select and highlight cells with colors by right-clicking the selection. As an FYI, these highlights are lost if you sort or search the grid; they are only for quickly marking interesting rows/cells.
  2. Copy selected rows/cells to the Windows clipboard so that you can paste them elsewhere. Hint: holding CTRL+SHIFT when right-clicking, or pressing CTRL+SHIFT+C, copies the cell values only, without the column names. Without this combination, the column names are copied as well.
  3. Expand/collapse the selected column. Column widths are a default size and do not expand to the width of the contents. Expand/collapse toggles this when you need to see the entire line. The F4 keyboard shortcut also does this for you. Press CTRL+K to see a list of all keyboard shortcuts since there are quite a few of them!
  4. Search/drill down into existing results. The search box in the top-right of the LPS interface is contextual: if you are viewing the library it searches the library, and if you are viewing query results it searches within those results. It also searches drill-down style, meaning each search only looks at the currently visible rows. This is an important distinction: if you have 5000 rows and search for all records containing the word Error, yielding 100 rows, a new search for the word Failed will only search those 100. This is very handy for digging into the data without having to run a new query. To restore the original full result set, just click the X button by the search box. Additionally, any grid showing searched results displays bluish-colored text, and the tab will contain a dot by its name so you know it contains more records than are currently visible. This is also reflected in the chart.

Below we have a query which returned 10557 rows and is now being searched for "logon.aspx", of which there are 250 matches. Also notice I have collapsed the query window, since I'm done with the query for now and just want to work with the results.

Actions Bar

Since the actions bar gets most of its use when working with queries, let's display and describe the buttons. To save space they are icons instead of text, so their meanings may not be immediately recognizable. When in doubt, you can always hover your mouse over a button and it will display its underlying action:

New Query – Creates a new query with the default settings and a log type of NOTSET. You’ll need to choose a log type to match the logs you wish to query.

Run Query – This is where the magic happens. Clicking this button will attempt to execute the visible query immediately.

Save Query – Whether a query of your own design or a modified query from the library this button saves the focused query into the library. If you have modified an existing query and want to keep the original you’ll want to change the name of the query in the save dialog:

Make the necessary changes above, then click Save. You will be warned if this action will overwrite an existing query. You may assign a category, but be aware that categories may be deprecated in the future. Lastly, if you didn't already choose the log type (surely you did, and tested the query before saving :)), you can set it in the log type drop-down menu above. Once the query is saved it will show up in the library for future use.

Export CSV – Exports the results of a completed query to CSV format. This simply takes the existing columns and rows and creates a CSV file that you can save to your chosen location.

Note: You can also export to CSV as part of the query itself using the INTO statement, leaving the results grid displaying only the path to the file that was created. More on this will be covered later; however, here is a quick screenshot of what this looks like so you will be aware. Notice the filename is in blue text and is a link to the actual file; in other words, if you click it, the file will open in the default CSV editor. Additionally, there is an option in preferences to auto-open CSV results when the query completes:

Notice the INTO '[OUTFILEPATH]\CountHits.CSV' line above. This is how you can direct any query to CSV instead of the query window. This is actually a very important concept, because the underlying Log Parser 2.2 component is 32-bit only. That being the case, queries that return very large results can deplete Log Parser Studio's memory when building the grid that contains all those results, regardless of how much memory you have installed on your machine. While we are on the subject, there are a few simple rules we can follow to help avoid this situation:

  1. Write smart queries. Smart, well-designed queries rarely return 100,000 rows. In reality, a great query that gets you to the answer quickly usually returns a much smaller number of records. That's what queries are for: to tell a story and return the answers you need. If your query returns too many records, you may not be accomplishing much if you now have to wade through 100,000 entries.
  2. Use the TOP keyword when writing queries, and add it to existing queries if needed. Notice the query above contains "TOP 100", which limits the results to 100 rows regardless of how many matches were found. Use this if you know the answer you are looking for will likely exist in the "top" returned rows. It can be used in almost any query, and you can set TOP to any number you wish. Many queries use it for this exact reason. A good example might be a query that returns the TOP 20 users, where records beyond 20 aren't important.
  3. Redirect the results to a CSV file as part of the query, which bypasses having to process and build the result grid in LPS altogether. There are two distinct advantages to this: 1) you won't run the risk of running out of memory, and 2) the query will finish quicker because there is no need to process all the records and build the grid. See the INTO statement mentioned above.
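Putting rule 3 into practice is just a matter of adding an INTO line between SELECT and FROM. A hypothetical sketch against the test log from Part I (the Errors.CSV filename is my own; any name works):

```sql
SELECT user, time, status
INTO '[OUTFILEPATH]\Errors.CSV'
FROM '[LOGFILEPATH]'
WHERE status LIKE '%Error%'
```

When the query completes, the grid shows only a link to the CSV file rather than the rows themselves.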

Log Files – This button opens the Log File Manager where you browse for and choose the logs you wish to query. See Getting Started with Log Parser Studio Part I on how to use the Log File Manager.

Run Batch – A batch is a collection of queries that can all be executed at once. You add queries to the batch from the library using right-click > Add to Batch. Since LPS is multi-threaded, all queries will run in the background and you can continue to work on other queries, browse the library etc.

Show Batch – Opens the batch manager window. You can review the queries in the batch list, execute them directly, and save the batch for later use. If you went through the trouble of choosing a batch of queries you run regularly, save some time and save them as a batch; you can return to the batch manager later, load them and run them.

Show Chart – Many queries can be charted. Here is the rule: the chart always gets its axes from the first two columns in the results, and the second column must always be some type of number. However, you can drag and drop columns. So if you have a query that, for example, has a list of dates in column one and ten additional columns with numbers, you can simply drag different number-based columns into the column-two position and generate a new chart based on those numbers! Charts can also be exported as images, copied to the clipboard, customized, zoomed, etc. More on charts in later posts:

Above we have a query that counts the number of hits to an Exchange CAS server (OWA, EWS, ActiveSync), quantized to 5-minute intervals, shown using the charting feature. Charting is a great way to present your findings in a meaningful way to others. Here is the same chart again in a different format:
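As a sketch of what a chart-friendly query looks like (exactly two columns, with a number in the second; QUANTIZE, COUNT and GROUP BY are covered in later posts), a per-5-minute hit count against an IIS log might resemble:

```sql
SELECT QUANTIZE(TO_TIMESTAMP(date, time), 300) AS [5 Minutes], COUNT(*) AS Hits
FROM '[LOGFILEPATH]'
GROUP BY [5 Minutes]
ORDER BY [5 Minutes] ASC
```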

Abort – The abort button will attempt to abort the visible running query immediately. However, there are lots of moving parts in a multithreaded, multicomponent environment; once Log Parser 2.2 has the query in the background there isn't much control, and it has to do its thing up to a point. Basically this means that much of the time the abort button will stop the query the first time LP 2.2 calls back to LPS on the background thread. However, in some cases, such as using INTO to send the output to CSV, LP 2.2 doesn't call back to LPS until the entire query is complete.

Even with these caveats, aborting queries has advantages: it clears up the grid so you can make changes and rerun the query, and it saves potential memory pressure because the query won't be returned in its entirety. For queries returned to the grid, if the rows are already being built in the background, only the rows that were actually processed will be returned. Use the abort button when needed, especially if you just kicked off a query and realized it's going to run way too long and return way too many records, but don't expect it to be a panacea.

Lastly, when a query has already returned results (i.e. the query has finished), the abort button simply clears the grid of any existing rows. So this button actually serves two functions: clear the results grid, or abort a running query.

Output – When using the INTO statement to redirect query results to a CSV or other file type, the output directory is where they always go. Click this button to go straight to that directory. The query log feature, which will be discussed later, also sends its output to this directory. The default output directory can be changed at any time in preferences.

Export PowerShell – This button creates a standalone PowerShell script from the selected query (whether selected in the library or in the currently focused query tab). All current settings are exported, including input and output query parameters. The PowerShell script does not require Log Parser Studio to execute successfully; it only requires Log Parser 2.2 to be installed on the computer you run the script from. Note that since the script could potentially be executed on machines that do not have Log Parser Studio installed, the following changes are automatically made to the script on export:

  1. All output is sent by default to the My Documents folder of the logged-in user on the machine the script is run from.
  2. Any file tokens are replaced with the actual names. For example ‘[LOGFILEPATH]’ will be replaced with ‘file1.log, file2.log’ etc.
  3. You cannot change input/output parameters post-export unless you are savvy enough to modify the script itself.

There are also four parameters you can use to modify behavior when running the script:

  1. -AutoOpen $true/$false – When $true, the exported file will be opened in the default viewer for the file type, for example a CSV opening in Excel. The default value is $false.
  2. -OutFile Filename – Use this to send the output file (a CSV, for example) to a different location than the one indicated in the original query.
  3. -IgnoreInParams $true/$false – Ignore any custom log type input parameters that were set in LPS and use the LP 2.2 defaults instead. Default is $false. 
  4. -IgnoreOutParams $true/$false – Ignore any custom log type output parameters that were set in LPS and use the LP 2.2 defaults instead. Default is $false.

Here is an example of how to run the query we previously used in the chart example, exported as a PowerShell script. All of the parameters above are optional; I'm including them below just so you can see how they are used when needed:

.\VDirHits.PS1 -AutoOpen $true -OutFile C:\Temp\VDirHits.CSV -IgnoreInParams $false -IgnoreOutParams $false

And finally the query in action:

In this post we covered query window basics along with the actions toolbar, going a little more in-depth on what each button does. We learned that smart queries usually don't return very large numbers of rows, but if they do, we should use the INTO statement to redirect the results to a CSV file. We can abort queries, and we can search query results, highlight rows/cells of interest, copy to the clipboard, chart, and even export a query as a PowerShell script for other purposes.

As you can see, there is a LOT that can be done with LPS queries, and they can be of great help when investigating logs and telling a story. The amazing thing is that there is much more to know and learn that will greatly expand your power to get at the data you need and present it well, including a short course in writing your own queries. Stay tuned, because all of it is on the way in future posts.

Source: https://blogs.technet.microsoft.com/karywa/2013/05/14/getting-started-with-log-parser-studio-part-3/

Introducing: Log Parser Studio

Anyone who regularly uses Log Parser 2.2 knows just how useful and powerful it can be for obtaining valuable information from IIS (Internet Information Services) and other logs. In addition, the power of SQL allows explicit searching of gigabytes of logs, returning only the data that is needed while filtering out the noise. The only thing missing is a great graphical user interface (GUI) to function as a front-end to Log Parser, and a "query library" to manage all those great queries and scripts one builds up over time.

Log Parser Studio was created to fulfill this need, allowing those who use Log Parser 2.2 (and even those who don't, due to the lack of an interface) to work faster and more efficiently and get to the data they need with less "fiddling" with scripts and folders full of queries.

With Log Parser Studio (LPS for short) we can house all of our queries in a central location. We can edit and create new queries in the ‘Query Editor’ and save them for later. We can search for queries using free text search as well as export and import both libraries and queries in different formats allowing for easy collaboration as well as storing multiple types of separate libraries for different protocols.

Processing Logs for Exchange Protocols

We all know this very well: processing logs for different Exchange protocols is a time-consuming task. In the absence of special-purpose tools, it becomes tedious for an Exchange administrator to sift through those logs and process them using Log Parser (or some other tool), especially if output format is important; you also need expertise in writing those SQL queries. Alternatively, you can use special-purpose scripts found on the web and then analyze the output to make some sense out of those lengthy logs. Log Parser Studio is designed mainly for quick and easy processing of different logs for Exchange protocols. Once you launch it, you'll notice tabs for different Exchange protocols, i.e. Microsoft Exchange ActiveSync (MAS), Exchange Web Services (EWS), Outlook Web App (OWA/HTTP) and others. Under those tabs there are dozens of SQL queries written for specific purposes (the description and other particulars of a query are also available in the main UI), which can be run with just one click!

Let's get into the specifics of some of the cool features of Log Parser Studio.

Query Library and Management

Upon launching LPS, the first thing you will see is the Query Library, preloaded with queries. This is where we manage all of our queries. The library is always available by clicking the Library tab. You can load a query for review or execution using several methods; the easiest is to simply double-click the query in the list, upon which it will open in its own query tab. The Query Library is home base for queries: all queries maintained by LPS are stored in this library, and there are easy controls to quickly locate desired queries and mark them as favorites for quick access later.


Library Recovery

The initial library that ships with LPS is embedded in the application and created upon install. If you ever delete, corrupt or lose the library, you can easily reset it back to the original using the recover library feature (Options | Recover Library). Recovering the library deletes all existing queries, so if you have custom or modified queries you don't want to lose, export those first; after recovering the default set, you can merge them back into LPS.


Depending on your needs, the entire library or subsets of it can be imported and exported, either in the default LPS XML format or as SQL queries. For example, if you have a folder full of Log Parser SQL queries, you can import some or all of them into LPS's library. Usually the only thing you will need to do after the import is make a few adjustments: all LPS needs is the base SQL query, with the filename references swapped out for '[LOGFILEPATH]' and/or '[OUTFILEPATH]', as discussed in detail in the PDF manual included with the tool (accessible via LPS | Help | Documentation).
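For example, here is a hypothetical before/after for a single imported query (the path and field names are illustrative, not from a real library):

```sql
/* Before import: standalone Log Parser 2.2 query with a hard-coded path */
SELECT cs-uri-stem, COUNT(*) AS Hits FROM 'C:\Logs\u_ex130604.log' GROUP BY cs-uri-stem

/* After adjustment in LPS: the filename becomes the token */
SELECT cs-uri-stem, COUNT(*) AS Hits FROM '[LOGFILEPATH]' GROUP BY cs-uri-stem
```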


Remember that a well-written, structured query makes all the difference between a successful query that returns the concise information you need and a subpar query that taxes your system, returns much more information than you actually need, and in some cases crashes the application.


The art of creating great SQL/Log Parser queries is outside the scope of this post, however all of the queries included with LPS have been written to achieve the most concise results while returning the fewest records. Knowing what you want and how to get it with the least number of rows returned is the key!

Batch Jobs and Multithreading

You’ll find that LPS in combination with Log Parser 2.2 is a very powerful tool. However, if all you could do was run a single query at a time and wait for the results, you wouldn’t make nearly as much progress as you could be making. For this reason LPS includes both batch jobs and multithreaded queries.

A batch job is simply a collection of predefined queries that can all be executed with the press of a single button. From within the Batch Manager you can remove any single query (or all of them) as well as execute them. You can also execute them by clicking the Run Multiple Queries button or the Execute button in the Batch Manager. Upon execution, LPS will prepare and execute each query in the batch. By default LPS will send ALL queries to Log Parser 2.2 as soon as each is prepared. This is where multithreading works in our favor. For example, if we have 50 queries set up as a batch job and execute the job, we’ll have 50 threads in the background all working with Log Parser simultaneously, leaving the user free to work with other queries. As each job finishes, the results are passed back to the grid or the CSV output based on the query type. Even in this scenario you can continue to work with other queries, search, modify and execute. As each query completes, its thread is retired and its resources freed. These threads are managed very efficiently in the background so there should be no issue running multiple queries at once.


Now what if we wanted the queries in the batch to run one at a time instead, for performance or other reasons? This functionality is already built into LPS’s options. Just make the change in LPS | Options | Preferences by checking the ‘Process Batch Queries in Sequence’ checkbox. When checked, the first query in the batch is executed and the next query will not begin until the first one is complete. This process continues until the last query in the batch has been executed.


In conjunction with batch jobs, the automation feature allows unattended, scheduled execution of batch jobs. For example, we can create a scheduled task that will automatically run a chosen batch job which also operates on a separate set of custom folders. This process requires two components, a folder list file (.FLD) and a batch list file (.XML). We create these ahead of time from within LPS. For more details on how to do that, please refer to the manual.


Many queries that return data to the Result Grid can be charted using the built-in charting feature. The basic requirements for charts are the same as Log Parser 2.2’s:

  1. The first column in the grid may be any data type (string, number, etc.)
  2. The second column must be some type of number (Integer, Double, Decimal); strings are not allowed

Keep the above requirements in mind when creating your own queries so that you consciously write the query to include a number for column two. To generate a chart, click the chart button after a query has completed. For #2 above, even if you forgot to do so, you can drag any numeric column and drop it into the second column after the fact. This way, if you have multiple numeric columns, you can simply drag the one you’re interested in into the second column and generate different charts from the same data. Again, for more details on the charting feature, please refer to the manual.
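As a chart-ready sketch (again assuming IISW3C logs; TO_STRING here formats the time field into an hour string), the following satisfies both requirements, with a string first column and a numeric second column:

```sql
SELECT TO_STRING(time, 'HH') AS Hour, COUNT(*) AS Requests
FROM '[LOGFILEPATH]'
GROUP BY Hour
ORDER BY Hour ASC
```

Running this and clicking the chart button would plot requests per hour with no column rearranging needed.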


Keyboard Shortcuts/Commands

There are multiple keyboard shortcuts built into LPS. You can view the list anytime while using LPS by clicking LPS | Help | Keyboard Shortcuts. The currently included shortcuts perform the following actions:

  • Start a new query.
  • Save the active query in the library or query tab, depending on which has focus.
  • Open the library window.
  • Add the selected query in the library to the batch.
  • Open the Batch Manager.
  • Add the selected queries to the batch.
  • Duplicate the current active query to a new tab.
  • Open the error log if one exists.
  • Export the currently selected query results to CSV.
  • Add the selected query in the library to the favorites list.
  • Open the raw Library in the first available text editor.
  • Reload the Library from disk.
  • Execute the active query.
  • Edit the name/description of the currently selected query in the Library.
  • Display the list of IIS fields.

Supported Input and Output types

Log Parser 2.2 has the ability to query multiple types of logs. Since LPS is a work in progress, only the most used types are currently available. Additional input and output types will be added when possible in upcoming versions or updates.

Supported Input Types

Full support for W3SVC/IIS, CSV and HTTP Error logs, and basic support for all built-in Log Parser 2.2 input formats. In addition, LPS includes some custom-written formats, such as Microsoft Exchange-specific formats, that are not available with the default Log Parser 2.2 install.

Supported Output Types

CSV and TXT are the currently supported output file types.

Log Parser Studio – Quick Start Guide

Want to skip all the details & just run some queries right now? Start here …

The very first thing Log Parser Studio needs to know is where the log files are, and the default location where any queries that export their results as CSV files should save them.

1. Setup your default CSV output path:

a. Go to LPS | Options | Preferences | Default Output Path.

b. Browse to and select the folder you would like to use for exported results.

c. Click Apply.

d. Any queries that export CSV files will now be saved in this folder.
NOTE: If you forget to set this path before you start, the CSV files will be saved in %AppData%\Microsoft\Log Parser Studio by default, but it is recommended that you change this to another location.

2. Tell LPS where the log files are by opening the Log File Manager. If you try to run a query before completing this step, LPS will prompt and ask you to set the log path. Upon clicking OK on that prompt, you are presented with the Log File Manager. Click Add Folder to add a folder or Add File to add one or more files. When adding a folder you still must select at least one file so LPS knows which type of log we are working with. When you do so, LPS will automatically turn this into a wildcard (*.xxx), indicating that all matching logs in the folder will be searched.

You can easily tell which folder or files are currently being searched by examining the status bar at the bottom-right of Log Parser Studio. To see the full path, roll your mouse over the status bar.

NOTE: LPS and Log Parser handle multiple types of logs and objects that can be queried. It is important to remember that the type of log you are querying must match the query you are performing. In other words, when running a query that expects IIS logs, only IIS logs should be selected in the File Manager. Failure to do this (it’s easy to forget) will result in errors or unexpected behavior when running the query.

3. Choose a query from the library and run it:

a. Click the Library tab if it isn’t already selected.

b. Choose a query in the list and double-click it. This will open the query in its own tab.

c. Click the Run Single Query button to execute the query.

The query execution will begin in the background. Once the query has completed, there are two possible output targets: the result grid in the top half of the query tab, or a CSV file. Some queries return to the grid while other, more memory-intensive queries are saved to CSV.

As a general rule, queries that may return very large result sets are best sent to a CSV file for further processing in Excel. Once you have the results, there are many features for working with them. For more details, please refer to the manual.

Have fun with Log Parser Studio! & always remember – There’s a query for that!

Source: http://blogs.technet.com/b/exchange/archive/2012/03/07/introducing-log-parser-studio.aspx

Log Parser Studio Report Collection – Troubleshooting Exchange 2013 Performance

In Exchange Server 2013, all traffic must go through IIS, so analyzing IIS logs is a great way to get an overview of the number of connections that are hitting a server, the protocol specific information, and the users impacting performance. This zip file contains over 20 new reports for Log Parser Studio (LPS) for troubleshooting Exchange 2013 Performance issues.

Download Log Parser Studio from here and review the LPS blog series from Kary Wall, Getting Started with Log Parser Studio.

Report Descriptions:

  • IIS: Requests Per Hour – Point this query to IIS logs from either the Default Web Site (W3SVC1 directory) or the Backend Website (W3SVC2 directory), but not both at the same time.
  • ACTIVESYNC_WP: Clients by percent – Calculates all ActiveSync requests broken down by user-agent and percentage of each client to the total number of requests.
  • ACTIVESYNC_WP: Requests per hour (CSV) – Lists the ActiveSync requests per hour and sends the results to a CSV file.

  • ACTIVESYNC_WP: Requests per user (CSV) – Lists ActiveSync requests per user and sends the results to a CSV file.

  • ACTIVESYNC_WP: Requests per user (Top 10k) – Lists ActiveSync requests per user for the top 10,000 users

  • ACTIVESYNC_WP: Top Talkers (CSV) – Lists the top ActiveSync clients from highest to lowest request count and sends the result to a CSV file.
  • EWS_WP: Clients by percent – Calculates all EWS requests broken down by user-agent and percentage of each client to the total number of requests.
  • EWS_WP: Requests per hour (CSV) – Lists the total number of EWS requests per hour.
  • EWS_WP: Requests per user (CSV) – Lists EWS requests per user and sends the results to a CSV file.
  • EWS_WP: Requests per user (Top 10k) – Lists EWS requests per user for the top 10,000 users.
  • EWS_WP: Top Talkers (CSV) – Lists the top EWS clients from highest to lowest request count.
  • EWS_WP: Lync GetUserPhoto requests – All Lync requests for GetUserPhoto listed in descending order.
  • EWS_WP: Mac Mail User Requests by user/request – Mac Mail requests by user and number of requests sorted descending.
  • EWS_WP: Mac Outlook User Requests by client/request – MacOutlook User Requests by client/request from highest to lowest hits.
  • EWS_WP: Non Outlook or Mac Requests – Lists all EWS requests that are not generated by either Outlook or Mac ordered by highest to lowest.
  • OLA_WP: Errors, per user, per hour, per day – Lists Outlook Anywhere errors per user, per hour, per day.
  • OLA_WP: Requests per hour – Lists the Outlook Anywhere requests per hour.
  • OLA_WP: Requests per hour, per user – Lists the Outlook Anywhere requests per hour, per user.
  • OLA_WP: Requests per user (CSV) – Lists Outlook Anywhere requests per user.
  • OLA_WP: Requests per user (Top 10k) – Lists Outlook Anywhere requests per user for the top 10,000 users.
  • OLA_WP: Top Talkers – Lists the top Outlook Anywhere clients from highest to lowest request count.
  • OWA_WP: Clients by percent – Calculates all OWA requests broken down by user-agent and percentage of each client to the total number of requests.
  • OWA_WP: Requests per hour (CSV) – Lists the OWA requests per hour and sends the results to a CSV file.
  • OWA_WP: Requests per user (CSV) – Lists OWA requests per user and sends the results to a CSV file.
  • OWA_WP: Requests per user (Top 10k) – Lists OWA requests per user for the top 10,000 users.
  • OWA_WP: Top Talkers (CSV) – Lists the top OWA clients from highest to lowest request count and sends the result to a CSV file.



Source: https://gallery.technet.microsoft.com/office/Log-Parser-Studio-Report-f17f6b8b

Getting Started with Log Parser Studio – Part 2

In my last post, Getting Started with Log Parser Studio – Part 1, I showed how to get Log Parser Studio along with its minimal prerequisites installed, basic setup as well as running your first query. In this post I’ll be taking you on a basic “getting around town” tour to help familiarize you with the LPS Query Library and managing queries. To kick things off let’s take a quick look at the library.

Working with the library

Above we see the library that holds all the queries. It’s fairly self-explanatory: it’s a list of all the queries that LPS manages, along with a description, date modified, type of query and the query itself (not all of these are visible in the image above). All queries are prefixed with the basic category they reside in. For example, a query that queries log files for an IIS website will appear in the following format: IIS: Name Of Query. This makes it easy to visually browse for the query you are looking for.

If you’ll remember, in my last post I mentioned that the type of query needs to match the type of log being queried. These prefixes directly (or sometimes indirectly) correlate to those types, so if you have IISW3C logs to query, queries beginning with IIS: are the ones you want. You can also sort the queries by clicking the column header of the field you wish to sort by.

Another advantage to this is searching. Notice the search box at the top right. To quickly narrow down the visible results in the library to only the log type you need, simply type part or all of any prefix and click the search (>) button. This is a free-text search of the query name field, so you can search for any text contained in any query name, no matter where it falls within that string. To clear the results and show all queries again, click the X button or press the escape key on your keyboard.

To open any query just double-click it and it will open in its own tab. You can also right-click a query from within the library for a list of context menu options, which are as follows:

Open – Same as double-clicking a query.

Run now – This will open all selected queries then immediately execute them.

Add to batch – This will add the selected queries to the batch manager.

Favorites – Adds selected queries to your favorites list. You can add/remove queries you use the most to your favorites for quick access.

Category – Assign the query(s) to different categories. Note: Due to the ease of searching, static categories may be deprecated in the future.

Delete – Be careful! This choice deletes all selected queries. You will get a final warning before doing so. Additionally, this only deletes them from memory; the deletes are not final until you formally close LPS without saving the library.

Hidden feature: CTRL+C. Using this key combination on a selected query copies the query text to your clipboard. This lets you quickly look at the query in a text editor without having to open it, or send the text portion of a query as-is to someone via email, or serve any other purpose you have in mind.

Quickly edit query meta-data

For basic edits such as a query’s category, name, description or log type, it’s a bit tedious to formally open it, make a simple name change, click save, then close a query you aren’t even planning on executing. To get around this, select any query in the library and press F2. This opens a basic editor for changing those fields. After clicking save, the changes are propagated back to the library.

Importing and Exporting

Queries can be imported and exported as an entire library, a single query or a group of selected queries. When importing, multiple library XML files can be imported at once, choosing only the queries from each of those files that you want. Once chosen, you have the choice to merge these into the library or completely replace the current library. Depending on your workflow this can be very advantageous. You may have certain queries for certain customers, projects or investigations and so on. You could save small groups of queries for certain tasks, or export queries to send to others who can import them directly into LPS. Or you may simply want to add queries to the default library and export it somewhere so you have a backup. To access importing and exporting, go to File > Import or File > Export.

Hidden feature: You can directly open the existing library in your default text editor by pressing CTRL+ALT+L. Please be forewarned that this file must remain compliant with its format. Translation: make a typo, possibly even a case-sensitivity mistake, and you won’t be able to load this library any longer until you fix the issue. If, however, you are a more advanced user and are aware of the risk involved, having direct access to the raw data might be of value. Once it is opened you could also Save As to another location, which is yet another method of backing up the library. If you are new to LPS and are wondering how to edit a query, don’t do it here; that is accomplished by opening the query in LPS and editing it directly in the query editor window, which will be discussed soon.

Backup and Recovery of the Library

The library consists of a single XML file that contains all queries and is stored in the user’s AppData folder (LPSLibrary.xml). If this file ever becomes lost or corrupted, you can recover the default installed library by choosing Help > Recover Library from the main menu bar. However, any queries you have created yourself or existing ones that you have modified will no longer be accessible. If you have custom queries, it’s a great idea to use the export feature and export your custom queries and/or the entire library to a backup location. If for some reason you need your queries back, you can use the import feature to place them back into the library.


The library is the central storage location for your queries. You can execute, modify, import, export, search for and categorize queries. You can back up and restore libraries or parts of libraries and recover the original default library. Queries can be opened for review and then executed, or multiple queries can be executed immediately. You can have multiple libraries or groups of queries to suit your working style. The library is typically your home base for managing all your queries; manage it well and it will serve you well. Next up: working with the query editor.

Source: https://blogs.technet.microsoft.com/karywa/2013/04/21/getting-started-with-log-parser-studio-part-2/

Getting Started with Log Parser Studio – Part 1

Hopefully, if you are reading this you already know what Log Parser 2.2 is and that Log Parser Studio is a graphical interface for Log Parser. Additionally, Log Parser Studio (which I will refer to from here forward simply as LPS) contains a library of pre-built queries and features that increases the usefulness and speed of using Log Parser exponentially. If you need to rip through gigabytes of all types of log files and tell a story with the results, Log Parser Studio is the tool for you!

None of this is of much use if you don’t have LPS or know how to get it up and running, but luckily that is exactly what this blog post is about. So let’s get to it; the first thing you want to do of course is to download LPS and any prerequisites. The prerequisites are:

  1. Log Parser Studio (get it here).
  2. .NET 4.x which can be found here.
  3. Log Parser 2.2 which is located here.

Once everything is downloaded we’ll install the prerequisites first. Run the installer for Log Parser 2.2 and make sure that you choose the “Complete” install option. The complete install option installs logparser.dll which is the only component from the install LPS actually requires:

Next we want to install .NET 4 and you can run the webinstaller as needed. Once it is installed all that is left is to install Log Parser Studio. Oh snap, LPS doesn’t require an install, all you need to do is unzip all the files into a folder in the location of your choice and run LPS.exe. Once you have completed these steps the install is complete and the only thing left is a few basic setup steps in LPS.

Setting up the default output directory

LPS (based on the query you are running) may export the results to a CSV, TSV or other file format as part of the query itself. The default location is C:\Users\username\AppData\Roaming\ExLPT\Log Parser Studio. However, it’s probably better to change that path to something you are more familiar with. To set a new default output directory run LPS and go to Options > Preferences and it is the first option at the top:

Click the browse button and choose the directory you wish to use as your default output directory. You can always quickly access this folder directly from LPS by clicking the show output directory button in the main LPS window. If you just exported a query to CSV and want to browse to it, just click that button, no need to manually browse:

Choose the log files you wish to query

Next you’ll want to choose the log file(s) you want to query. If you are familiar with Log Parser 2.2, the following physical log file types are supported: .txt, .csv, .cap, .log, .tsv and .xml. To choose the logs you need, open the log file manager by clicking the orange “log” button shown in the screenshot above. Technically, you can query almost any text-based file; more on that in upcoming articles.
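For a sense of what querying a plain text-based file looks like, here is a sketch against a simple CSV log like the TestLog.CSV example from the tutorial above (the column names user, status and duration are assumed from that example, not from a shipped LPS query):

```sql
SELECT user, COUNT(*) AS Requests, AVG(duration) AS AvgDuration
FROM '[LOGFILEPATH]'
GROUP BY user
ORDER BY Requests DESC
```

With the CSV file loaded in the log file manager and the query's log type set to CSV, this would summarize requests and average duration per user.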

In the log file manager you can choose single files, multiple files or entire folders based on log type. Just browse to the logs you care about. You can house multiple file types in the log file manager and only the ones that are checked will be queried. This is very handy if you have multiple log types and you need to quickly switch between without having to browse for them each time:

Note: When adding a folder you need to double-click or select at least one log file. LPS will know that you want all the files and will use wildcards accordingly instead of the single file you selected. If you use the Add Files button then only files you select will be added.

Running your first query

By this point you are ready to start running queries. All queries are stored in the LPS library which is the first window you see when opening LPS. To load any query to run, just double-click it and it will open in its own tab:

The only thing left is to execute the query, and to do so just click the execute query button. If you are wondering why I chose such an icon, it’s because Log Parser uses SQL syntax and traditionally this icon has always been used to identify the “run query” button in applications that edit queries, such as SQL Server Management Studio. If you are wondering why there is another button below that is similar but contains two exclamation points, you might be able to guess that it executes multiple queries at once. I’ll elaborate in an upcoming post that covers grouping multiple queries together so they can all be executed as a batch.

Here are the results from my test logs after the query has completed:

We can see that it took about 15 seconds to execute and 9963 records were returned; there are 36 queries in my test library, zero batches executing and zero queries executing.


And that’s it, you are now up and running with LPS. Just choose your logs, find a query that you want to use and click run query. The only thing you need to be aware of is that different log formats require different log types, so you’ll want to make sure those match or you’ll get an error. In other words, the IISW3C format is different than the format of an XML file, and LPS needs to know this so it can pass the correct information to Log Parser in the background. Thankfully, these are already set up inside the existing queries; all you need to do is choose an IIS query for IIS logs and so on.

Most every button and interface element in LPS has a tool-tip explanation of what that button does so be sure to hover your mouse cursor over them to find out more. There is also a tips message that randomly displays how-to tips and tricks in the top-right of the main interface. You can also press F10 to display a new random tip.

You can also write your own queries, save them to the library, edit existing queries and change log types and all format parameters. There is a huge list of features in LPS, both obvious and not so obvious, so upcoming posts will build on this and introduce you to the sheer power and under-the-hood tips and tricks that LPS offers. It’s amazing how much can be accomplished once you learn how it all works, and that’s what we are going to do next. :)

Continue to the next post in the series: Getting Started with Log Parser Studio – Part 2

Source: https://blogs.technet.microsoft.com/karywa/2013/04/21/getting-started-with-log-parser-studio-part-1/