Regular Expressions Support in SharePoint 2010 Crawling

Search admins often need to omit from a crawl files that match a certain pattern. For example:

· In a bank, file names starting with a Social Security Number

· In a business site, file names containing credit card numbers

· URLs having a specific value for a certain parameter of an .aspx file

· etc.

The usual solution is to let admins create “crawl rules” that restrict crawlers from following specific links. The most basic crawl rule specifies the complete URL of a single file to be crawled, which would require the admin to create as many rules as there are files in the repository. A more practical and commonly implemented solution uses the wildcard character “*”. This character matches everything, so admins can create a rule using the wildcard to include (or omit) all files under a particular folder or path.


This works if all the files are located neatly in one folder, but what if they are spread across the repository (or Web site)? This is the problem that is solved by using regular expression (RegEx) syntax.
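The limitation can be sketched with Python's fnmatch module, which implements the same “*”-style wildcard matching as a basic crawl rule (the URLs below are made-up illustrations):

```python
import fnmatch

# A wildcard rule neatly covers everything under one folder prefix...
assert fnmatch.fnmatch("http://mysite/docs/report.html", "http://mysite/docs/*")

# ...but it cannot express "omit any URL whose file name contains an
# SSN-like pattern", because such files may live anywhere in the site:
assert not fnmatch.fnmatch("http://mysite/hr/123-45-6789.txt", "http://mysite/docs/*")
```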

The SharePoint Solution

In SharePoint 2007, the wildcard operator “*” is the only operator supported in crawl rules for matching characters. As mentioned, it is a brute-force operator that matches everything. Wildcard-only rules do not give the admin the flexibility to, for example, recognize and omit URLs that contain Social Security Numbers, or that have an aspx parameter with a specific value.

SharePoint 2010 adds new capabilities in this area. The default behavior of crawl rules is the same as in SharePoint 2007, but administrators can now create crawl rules that include or exclude URLs matching regular expressions. To enable regular expressions, the admin need only select the corresponding check box in the Crawl Rules creation UI, as shown in the image below.


Regular Expression Operators


Listed below are the regular expression operators that are supported for crawl rules in SharePoint 2010:

Grouping: ( )

Characters can be grouped using round brackets. Any operator applied to the group is applied to the group as a whole.

Match any character: .

This operator matches any single character. It does not match NULL.

Match zero or one: ?

The expression it is applied to may be absent from the target address or appear exactly once.

Will match: http://mysite/page.html AND http://mysite/page1.html

Match zero or more: *

The expression it is applied to may be absent from the target address or repeated any number of times.

Will match: http://mysite/page.html AND http://mysite/page111.html

Match at least one: +

The expression it is applied to must appear in the target address at least once.

Will match: http://mysite/page1.html AND http://mysite/page111.html

Exact count: {num}

This operator is denoted by a number inside “{}”, e.g. {5}. The expression it is applied to must have exactly the specified number of repetitions in the target address.

Minimum count: {num,}

This operator is denoted by a number inside “{}” followed by a “,”, e.g. {5,}. The expression it is applied to must have at least the specified number of repetitions in the target address.

Will match: http://myfiles/9999-00.html AND http://myfiles/99999-00.html

Range count: {num1,num2}

This operator is denoted by two numbers inside “{}” separated by a “,”, e.g. {5,8}. The first number defines the lower limit and the second number defines the upper limit. The expression it is applied to may have any number of repetitions between num1 and num2 in the URL. A valid rule always has num1 < num2.

Will match: http://myfiles/9999-00.html AND http://myfiles/9999-000.html

OR: |

This operator is applied to two expressions and matches exactly one of the two.

Will match: \\myshare\folder1\<any files> OR \\myshare\folder2\<any files>

Won’t match: \\myshare\folder1folder2\<any files>

Character list: [ <list of chars> ]

This operator is denoted by a list of characters inside “[]”. It matches any single character specified in the list. The admin can specify a range of characters by using the “-” operator inside the brackets.

Will match: http://testhost/test1.htm OR http://testhost/test2.htm OR http://testhost/test3.htm
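These operators have the same semantics as the standard regular-expression operators implemented by Python's re module, so the “Will match” URLs above can be checked against illustrative rules. The rule patterns below are reconstructions for illustration, not rules taken from the original examples:

```python
import re

# One illustrative pattern per operator, checked against the "Will match"
# URLs listed above. re.fullmatch requires the whole URL to match, which
# mirrors how a crawl rule is compared against a complete URL.
examples = {
    r"http://mysite/page1?\.html": [              # ? : zero or one
        "http://mysite/page.html", "http://mysite/page1.html"],
    r"http://mysite/page1*\.html": [              # * : zero or more
        "http://mysite/page.html", "http://mysite/page111.html"],
    r"http://mysite/page1+\.html": [              # + : at least one
        "http://mysite/page1.html", "http://mysite/page111.html"],
    r"http://myfiles/9{4,}-00\.html": [           # {num,} : minimum count
        "http://myfiles/9999-00.html", "http://myfiles/99999-00.html"],
    r"http://myfiles/9999-0{2,3}\.html": [        # {num1,num2} : range count
        "http://myfiles/9999-00.html", "http://myfiles/9999-000.html"],
    r"http://testhost/test[1-3]\.htm": [          # [...] : character list
        "http://testhost/test1.htm", "http://testhost/test2.htm",
        "http://testhost/test3.htm"],
}
for pattern, urls in examples.items():
    for url in urls:
        assert re.fullmatch(pattern, url), (pattern, url)

# | : OR, with the UNC-path example from the table (backslashes escaped)
unc = r"\\\\myshare\\(folder1|folder2)\\.*"
assert re.fullmatch(unc, r"\\myshare\folder1\a.txt")
assert not re.fullmatch(unc, r"\\myshare\folder1folder2\a.txt")
```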


Using RegEx Operators in Crawl Rules

Once you understand the RegEx operators above and how to enable them in the crawler, there are only a couple of other things to keep in mind:

Protocol part

Regular expression operators cannot be used in the protocol part of the URL. This means, for example, that a rule whose protocol part is itself a regular expression cannot be created. If you try to create such a rule, the system will add http:// at the beginning, making the “.*” the second part of the URL. The resulting rule in this case will be:

http://.*//*

which may not be what you intended.
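A minimal sketch of the normalization behavior described above; the set of recognized protocol prefixes here is an assumption for illustration:

```python
# Hypothetical sketch: if the rule text does not start with a recognized
# protocol, "http://" is prepended, so regex operators intended for the
# protocol part end up matching the host part instead.
def normalize_rule(rule: str) -> str:
    known_protocols = ("http://", "https://", "file://")  # assumed set
    if not rule.lower().startswith(known_protocols):
        return "http://" + rule
    return rule

assert normalize_rule(".*//*") == "http://.*//*"          # regex pushed into host part
assert normalize_rule("http://mysite/*") == "http://mysite/*"  # left unchanged
```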

Case sensitive comparison

RegEx rules are case insensitive by default. To make a rule match URLs case-sensitively, the administrator should select the “Match case” check box in the rule creation UI as shown below:


If the “Match case” check box is selected, the crawler performs case-sensitive matching of URLs during the crawl. In the example above, the rule will match http://test/AbC123.html and WILL NOT match http://test/Abc123.html.

This feature comes in handy when SharePoint is used to crawl web sites hosted on Unix-based web servers, where URLs are case sensitive.
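The two comparison modes can be mimicked with Python's re module. The rule pattern used here is an assumed example consistent with the URLs above, not the rule from the original screenshot:

```python
import re

# Assumed example rule, matching URLs like http://test/AbC123.html
pattern = r"http://test/AbC.*\.html"

# Default crawl-rule behavior: case-insensitive comparison.
assert re.fullmatch(pattern, "http://test/Abc123.html", re.IGNORECASE)

# With "Match case" selected: case-sensitive comparison.
assert re.fullmatch(pattern, "http://test/AbC123.html")
assert not re.fullmatch(pattern, "http://test/Abc123.html")
```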


Here are some interesting examples demonstrating the usefulness of regular expressions in crawl rules:




Match everything under the share “myshare”


Match all links with file names having the following pattern: <4 digits>-<1 or 2>-<4 characters>.docx


Match all files in folder1 or myfolder in \\myshare


Specify a regex operator (“?” in this case) in a regex rule.


Match all links pointing to myasp.aspx with either param1 or param2 specified.


Match all aspx links that have a specific value for one parameter, ignoring the value of a second parameter


Match all files whose names start with a Social Security Number
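The rule screenshots for the examples above are not shown, so the patterns below are plausible reconstructions of a few of the described rules, checked with Python's re module (all URLs and share names are assumptions for illustration):

```python
import re

# Reconstructed rules for three of the examples; hypothetical, not the
# article's originals.
rules = {
    # <4 digits>-<1 or 2>-<4 characters>.docx anywhere in the site
    "docx":   r".*/[0-9]{4}-[12]-.{4}\.docx",
    # myasp.aspx with either param1 or param2 specified; note that the
    # "?" starting the query string must be escaped as \? to be literal
    "params": r".*/myasp\.aspx\?(param1|param2)=.*",
    # file names that start with an SSN-shaped value (###-##-####)
    "ssn":    r".*/[0-9]{3}-[0-9]{2}-[0-9]{4}.*",
}

assert re.fullmatch(rules["docx"], "http://myfiles/2010-1-abcd.docx")
assert re.fullmatch(rules["params"], "http://mysite/myasp.aspx?param2=5")
assert re.fullmatch(rules["ssn"], "http://bank/123-45-6789_report.txt")
assert not re.fullmatch(rules["docx"], "http://myfiles/2010-3-abcd.docx")
```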


 Syed Anas Hashmi | SDET | Microsoft Enterprise Search Group