iOS scrolling issues in a SharePoint Online Public Website

SharePoint Online allows you to create a Public Facing Website (at least until March 2017 for existing customers). Although the public website has some limitations, there are still some pretty cool things you can do with it.

For a customer of ours, I implemented a responsive design using the Public Facing Website. I used the default corev15.css styling and added an additional CSS file for all the custom styling. When testing the website on different devices, though, I came across a strange bug: on all iOS devices, vertical scrolling was ‘groggy’, or simply very slow. This resulted in a poor user experience, especially for pages with quite a bit of content.

Turns out there is a pretty easy fix for this. In corev15.css, the basic scrolling is implemented in two different CSS rules, as shown below:
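Reconstructed from memory rather than copied verbatim from corev15.css, the two rules look roughly like this: the body itself never scrolls, and all scrolling is delegated to the workspace div.

```css
/* Approximate reconstruction of the corev15.css scrolling rules */
body {
  overflow: hidden; /* the body itself never scrolls */
}

#s4-workspace {
  overflow-y: auto;   /* all vertical scrolling happens inside this div */
  overflow-x: hidden;
}
```

Because the scrolling element is a div rather than the body, iOS falls back to its slow, non-momentum scrolling behavior for it.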

If you want to fix this annoying bug, simply add ‘-webkit-overflow-scrolling: touch;’ to the #s4-workspace CSS rule. Tada, scrolling now behaves as you would expect it to in the first place. Gotcha!

If you’re not using a custom master page or CSS file, you can simply add the CSS by clicking ‘Site’ and then ‘Edit style sheet’ in the ribbon. Add the rule below:

#s4-workspace {
    -webkit-overflow-scrolling: touch;
}

That’s it, you’re all set! Happy coding.


Inconveniently working with App Permissions

Whenever you create an app, you have to think carefully about the permissions that the app needs in order to function correctly. Some apps can execute actions based on the permissions that the user has, but other apps need their own set of permissions. That is the case, for example, when you are working with remote event receivers in a provider-hosted app.

When it relates to content creation, an app can request four different sets of permissions:

  • Read; (which corresponds to the default Reader permission level)
  • Write; (which corresponds to the default Contributor permission level)
  • Manage; (which corresponds to the default Designer permission level)
  • FullControl; (which corresponds to the default Full Control permission level)

These four different sets of permissions can be requested in one of four scopes:

  • List;
  • Web;
  • Site Collection;
  • Tenant;

It is important to note that whenever your app requests the ‘FullControl’ permission set, that particular app cannot be uploaded to the SharePoint Store.

During the development of your app you might have to switch the permission level that your app requires. On MSDN, Microsoft explains how to change an app’s permissions after it has already been installed: navigate to http://<yoursharepointsite>/_layouts/15/AppInv.aspx, enter the app’s ID, and update the XML of the permission request. Recently, however, I found out that this does not work in all cases. In one of our apps we used remote event receivers to create a new subweb, which requires the FullControl permission set. At that point the app only had the Manage permission, so we had to change that. Updating the app and then re-trusting it unfortunately still resulted in a ‘You do not have the required permissions for performing that action’ error.
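For reference, a permission request asking for FullControl at site collection scope looks like the snippet below; this is the XML you would paste into AppInv.aspx (the scope URIs follow the MSDN documentation).

```xml
<AppPermissionRequests>
  <AppPermissionRequest
    Scope="http://sharepoint/content/sitecollection"
    Right="FullControl" />
</AppPermissionRequests>
```

The other scopes use the same URI scheme, e.g. http://sharepoint/content/sitecollection/web for the Web scope and http://sharepoint/content/tenant for the Tenant scope.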

Following the above-mentioned steps for updating the app’s permissions did not work either. In fact, the only way to get the new permissions working properly was to delete the entire app and install it again. After that, our new subwebs were created properly.

In short: there appears to be a bug in updating the permission request when developing your app and deploying it to an Office 365 developer site. To get the new permission request functioning properly, remove the app from your site and reinstall it. Gotcha!

Using REST to query data by filtering on Taxonomy Field

The new SharePoint API is a powerful thing. It lets you query for items using REST. However, there is an important thing to note when you are querying items using an SPQuery combined with a taxonomy field filter. Suppose you have a taxonomy field named ‘CategoryField’ which contains the different categories for certain pages. You would expect the method below to work properly:
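A minimal sketch of such a query, posting a CAML query to the REST GetItems endpoint; the list name (‘Pages’), field names and term label here are assumptions for illustration:

```javascript
// Build the CAML view for the category filter. The hidden note field
// ('CategoryTaxHTField0') stores values like "1;#Label|<guid>", which is
// why a Contains on the label works.
function buildCategoryCaml(fieldInternalName, termLabel) {
  return "<View><Query><Where><Contains>" +
         "<FieldRef Name='" + fieldInternalName + "' />" +
         "<Value Type='Text'>" + termLabel + "</Value>" +
         "</Contains></Where></Query></View>";
}

// POST the query to the GetItems endpoint. Note the $expand/$select bit,
// which also pulls in the server-relative URL of each page's file.
function getPagesByCategory(webUrl, termLabel) {
  var endpoint = webUrl +
    "/_api/web/lists/getbytitle('Pages')/GetItems" +
    "?$expand=File&$select=Title,File/ServerRelativeUrl";
  return fetch(endpoint, {
    method: "POST",
    headers: {
      "Accept": "application/json;odata=verbose",
      "Content-Type": "application/json;odata=verbose"
      // A real page would also send the X-RequestDigest header here.
    },
    body: JSON.stringify({
      query: {
        __metadata: { type: "SP.CamlQuery" },
        ViewXml: buildCategoryCaml("CategoryField", termLabel)
      }
    })
  }).then(function (res) { return res.json(); });
}
```

As written, with the filter on ‘CategoryField’, this returns no items; the fix described below is to pass the hidden field name instead.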

However, when you execute this function, the data returned is empty! It turns out that the field you use in your SPQuery has to be the hidden field that actually stores the content of your taxonomy field. So in the above method, change the FieldRef Name=’CategoryField’ to FieldRef Name=’CategoryTaxHTField0’ and execute it again. This time the method works and you get back data you can actually use. By the way, did you notice that &$expand=File&$select=Title,File/ServerRelativeUrl bit? More on that later…

Dynamically changing MasterPages in SharePoint 2013 when using Device Channels

Suppose you have a public facing website with about 40 different page layouts. You are also using the Device Channels functionality, which is new in SharePoint 2013. Basically, this means you can set multiple MasterPages, one for each device channel you use. But what if you want a different MasterPage for just one of the page layouts you use? Keep in mind that when you update the MasterPage from the code behind file, the MasterPage is updated for any and all device channels you use. So what if you only want to change the MasterPage for a certain device channel, but keep the default for all others? That’s where this post comes in.

First, let’s take a look at how you can change the MasterPage from the code behind file of a Page Layout. This post by Kirk Evans shows you how to deploy a Page Layout and how to create a code behind file for it. This post by Eric Overfield gives us a hint on how to change the MasterPage from the code behind file. As noted there, you can only change this property in the OnPreInit method; otherwise the MasterPage will already have loaded and changing the property results in an error. MSDN shows the following example:
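The MSDN example boils down to overriding OnPreInit in the page layout’s code behind; the master page path below is an example, not a fixed value:

```csharp
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);
    // Swapping the master page is only allowed this early in the
    // page life cycle, before the master page has been loaded.
    this.MasterPageFile = "/_catalogs/masterpage/custom.master";
}
```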

This would, however, change the MasterPage for all device channels, and remember that we only want to change it for a specific one. This post advises using an HttpContext variable called ‘EffectiveDeviceChannel’, and also warns that this property might not be available everywhere because it is loaded on demand. I tried to use this property, but unfortunately, as you might have guessed, it was not yet available during the OnPreInit method, so I had to find a different solution.

Browsing the web didn’t do me any good, and since most of the Mobile classes in SharePoint are sealed, those weren’t of any help either. After inspecting the entire HttpContext, there was one property that I could ‘misuse’, and it turned out to be the solution to my problem: HttpContext.Current.Items[‘MasterPageUrl’] was there, showing the URL of the MasterPage that was about to be loaded. Since the device channels all have different MasterPages, this allowed me to change the MasterPage for a specific Page Layout only. The code used for this is displayed below:

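A sketch of what this looks like in the page layout’s code behind; the master page file names here are examples from my scenario, so treat them as assumptions:

```csharp
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);

    // The URL of the master page that is about to load. With device
    // channels, each channel points at a different master page, so this
    // URL tells us which channel we are effectively on.
    var masterPageUrl = HttpContext.Current.Items["MasterPageUrl"] as string;

    if (!string.IsNullOrEmpty(masterPageUrl) &&
        masterPageUrl.EndsWith("mobile.master", StringComparison.OrdinalIgnoreCase))
    {
        // Swap the master page for this page layout on the mobile channel
        // only; all other channels keep their configured master page.
        this.MasterPageFile = "/_catalogs/masterpage/mobile-alternate.master";
    }
}
```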
So, when changing the MasterPage from a Page Layout file while also using Device Channels, resort to the HttpContext variable MasterPageUrl, because the EffectiveDeviceChannel variable is not yet loaded at that point in the page life cycle. Gotcha!

Sequence Dropdown Box using javascript and HTML

Have you ever come across the scenario where you have more than a few input fields and you want to give the user the ability to sort these fields, without having to copy and paste the input several times? In these cases, dropdown boxes next to the input fields specifying the order of these fields can be very useful. This got me thinking: “Doesn’t SharePoint already implement something like that, for example when modifying a list view?”. In this post I’ll describe how Microsoft has implemented this and how it can work for you. This example uses only JavaScript and HTML, so if you want to do some server side work with it, be sure to modify the code accordingly.

First off, here’s an example of modifying a list view, where you will find these dropdown boxes:


In order to retrieve the code that Microsoft uses for this, I inspected the page and rebuilt the same scenario. Here’s the pure HTML that is used:

As you can see, this is nothing fancy: just a simple HTML select element containing a few options. The onchange attribute calls a JavaScript function, which does all the magic for us and makes sure that all the other dropdown boxes get their values changed accordingly. The JavaScript used for this is:

The second value specified in this JavaScript call is the index of the current select element. For the first dropdown box that is 0, for the second it is 1, and so on. The third value is the total number of dropdown boxes.
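Since I can’t reproduce Microsoft’s exact code here, a self-contained equivalent of the same idea looks like this; the function and element names are my own, not the ones SharePoint uses internally:

```javascript
// Pure reordering logic: 'order' is the current field order (an array of
// field ids), 'field' is the field that moved, 'newPos' its new position.
function reorder(order, field, newPos) {
  var result = order.slice();
  result.splice(result.indexOf(field), 1); // take the moved field out...
  result.splice(newPos, 0, field);         // ...and drop it into its new slot
  return result;
}

// Wired to each dropdown in the page, e.g.:
//   <select class="field-order" onchange="onOrderChanged(this, 1, 4)">
// where 1 is the index of this field and 4 the total number of fields.
function onOrderChanged(select, fieldIndex, total) {
  var order = [];
  for (var i = 0; i < total; i++) order.push(i);
  var newOrder = reorder(order, fieldIndex, parseInt(select.value, 10));
  // Push the recomputed positions back into every dropdown so that no
  // two fields end up sharing the same position.
  var selects = document.querySelectorAll("select.field-order");
  for (var j = 0; j < total; j++) {
    selects[j].value = String(newOrder.indexOf(j));
  }
}
```

This is a simplified sketch of the behavior; the SharePoint original additionally tracks the previous selection, but the reordering principle is the same.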

So there you have it! A simple HTML and JavaScript example that makes your website that much more user friendly.

Error calling CreateNewDocumentWithRedirect in IE11 and Chrome

For one of our customers we had to create custom ‘New Document’ functionality. This customer had a lot of different document templates, all sharing the same fields. The normal way to handle this is to create a new content type for each document template, because there is a one-to-one relationship between a content type and the document template assigned to it: one content type can’t have more than one document template.

Luckily there are some good examples out there on creating a custom new document dialog:

All of these solutions create a popup window in which the proper template is selected. We opted for a different approach, because pop-ups are generally unfriendly UX. We created a panel that becomes visible when clicking the New button. That panel is built from a tree structure defined in the term store: you store the document templates in a shared location, and record that location in the term store when creating the tree, using local properties on the terms.

In the end it all comes down to one of two JavaScript calls to actually create the new document based on the appropriate template: createNewDocumentWithRedirect and createNewDocumentWithProgId. Both work, and while it can be tricky to get the attributes in the proper format, once that is done you should be solid… until your client uses IE11. In IE11, rather than creating a new document, the user is confronted with an unexpected SharePoint error: “The template must exist in the Forms directory of this document library.” The same JavaScript works fine in IE10, though, or when you render the page in any mode other than IE11. So the template can’t be in an invalid location, right? Right.

The difference between IE11 (or Chrome, for that matter) and all other modes comes down to how the JavaScript is handled. Below you’ll find the relevant JavaScript (from core.js) that is being called.
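This is a paraphrased, self-contained sketch of the core.js branch, not the verbatim Microsoft source; the function bodies are stand-ins, and only the control flow mirrors the real code:

```javascript
// The real check relies on browser sniffing; IE11 dropped the "MSIE"
// token from its user-agent string, so the check started returning false.
function IsClientAppInstalled(userAgent) {
  return userAgent.indexOf("MSIE") !== -1;
}

function createNewInClient(templateUrl) {
  return "client:" + templateUrl; // opens the template in the Office client
}

function createNewInBrowser(templateUrl) {
  // Intended for InfoPath-based forms only; for Office templates this path
  // fails with "The template must exist in the Forms directory of this
  // document library."
  return "browser:" + templateUrl;
}

function createNewDocumentWithRedirect(userAgent, templateUrl) {
  return IsClientAppInstalled(userAgent)
    ? createNewInClient(templateUrl)
    : createNewInBrowser(templateUrl);
}
```

Run it with an IE10 user-agent string and you land in createNewInClient; run it with an IE11 string (no "MSIE" token) and you land in createNewInBrowser, which is exactly the bug.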

It turns out that the function ‘IsClientAppInstalled’ (called from createNewDocumentWithRedirect) returns false for IE11 and true for any other version of IE. This means that the next function called is createNewInBrowser rather than createNewInClient, even though we are positive the client has Office installed. That function is meant for InfoPath-based forms, not for documents, which causes it to fail.

The solution to this issue is easy: we simply called createNewInClient directly, because we are certain all our users do indeed have Office installed. Gotcha!

P.S., if there is anyone out there that is interested in the Termstore based New Document solution, let me know and I’ll draft up a different post containing the entire solution.

MCMS2002 migration to a SharePoint 2013 Metadata driven environment

As any SharePoint enthusiast will acknowledge, migrating an existing website to a SharePoint environment is no easy task. The task becomes increasingly difficult when the source is a non-SharePoint website containing 100,000+ pages and documents. Tagging and organizing content in such a way that users will still be able to find it, either through search or through site navigation, can be very difficult, especially when the source content is unstructured. And finally: how can you ensure that all existing URLs are properly redirected to their new counterparts while maintaining the ranking with the major search providers?

In this blog post I’ll show you how we managed to overcome these challenges, complete with examples.

Case explanation

Our customer had a website based on Microsoft Content Management System 2002 SP1, which had been in place for over ten years. In those ten years the website had been modified to meet all of the customer’s requirements, and it worked like a charm. But because MCMS and SQL Server 2000 are no longer supported by Microsoft, the customer had to move to a new platform. Since publishing content through their website is one of the customer’s core businesses, they needed a robust platform with extensive publishing capabilities. They chose SharePoint 2013.

Along with the platform change, the new website also presents content to the visitor in a very different way. Instead of browsing articles through a hierarchical structure, we went for a search-driven site built on metadata. This meant that metadata would be of vital importance; without it, how could you find an article?

This presented us with a major challenge. More than ten years of publishing content meant that over 70,000 pages and 30,000 documents had to be migrated to SharePoint 2013 and had to be provided with additional metadata. Our goal was after all to make the content meaningful, relevant and, most of all, easily searchable.

Below we describe in detail how we approached this challenge.

Exporting the existing content

The first step towards migrating the website was to export the existing content. Because we had to deal with different sorts of content (website pages, documents and images), we decided on a generic approach that would work for all file types: we created a custom tool that exported all content to XML, one XML file per URL. Each XML file contained all the information we needed to create the new content in our SharePoint 2013 environment. All webpages and documents were ordered in a logical folder structure, so we could import based on the year of publication. A webpage published in May 2006 would be converted into an XML file in a folder structure like this: {export location}\{language}\2006\5\{articlename}.xml. The same principle was applied to documents and images, except that they were located in an extra subfolder named Documents or Images.

The XML files created were the only source of input we had for creating the new pages and their properties. That is why we had to make sure these files contained as much information as possible. Along with basic properties such as publication date and title, we added a keywords property, filled with the most important keywords on the page. The theme property was also of great importance for the new website; it could be derived from the URL of the existing content. A small snippet of the final XML is displayed below:
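The original snippet is not reproduced here; a hypothetical shape of an exported page, with element names that are assumptions rather than the tool’s actual schema, would be:

```xml
<!-- Hypothetical export of one page; element names are illustrative -->
<page>
  <url>/en/news/2006/some-article.htm</url>
  <title>Some article</title>
  <publicationDate>2006-05-12</publicationDate>
  <keywords>keyword1;keyword2;keyword3</keywords>
  <theme>News</theme>
  <body><![CDATA[ ...original HTML content of the page... ]]></body>
</page>
```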

Executing the export tool using the settings and configuration described earlier, resulted in a total of ~105,000 XML files. These files could then be used by the content creation tool for creating new content in our SharePoint 2013 environment.

Creating new content in SharePoint

The next step in the process was to create new content in our SharePoint 2013 environment using the exported XML files. For this we created custom tooling with a dynamic approach, giving our customer the ability to easily control the output generated by the content creation tool. To achieve this dynamic behavior, we combined two XML files that form the heart of the content creation tool.

The first XML file describes a mapping that, based on the URL property of the export, determines which content type is used for creating the new content, as well as which tag is added for search optimization. Below is a snippet of this XML file:
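The original snippet is not reproduced here; a hypothetical version of such a mapping file, with attribute names that are assumptions, could look like:

```xml
<!-- Hypothetical URL-to-content-type mapping; names are illustrative -->
<mappings>
  <mapping urlContains="/news/"  contentType="NewsPage"         searchTag="News" />
  <mapping urlContains="/press/" contentType="PressReleasePage" searchTag="Press" />
</mappings>
```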

The second XML file describes which property in the exported XML file is mapped to which SharePoint Field. Below is a snippet of this XML file:
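Again the original snippet is not reproduced here; a hypothetical field mapping file, with names that are assumptions, could look like:

```xml
<!-- Hypothetical export-property to SharePoint-field mapping -->
<fieldMappings>
  <fieldMapping source="title"           field="Title" />
  <fieldMapping source="publicationDate" field="ArticleStartDate" />
  <fieldMapping source="keywords"        field="Keywords" />
</fieldMappings>
```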

The flow of the content creation tool, which is executed for every input XML file, is displayed below:


First, the content type is chosen based on the mapping file shown earlier. Each content type has its own collection of fields that need to be supplied with information. For each field in this collection, we retrieved the appropriate value based on the field mapping shown in the earlier code snippet. To read the data from the XML mapping files, we created a generic method, displayed in the code sample below. After all field values were retrieved, a new page was created based on the mapped content type, and its field values were set. The page was then saved and published. Finally, an item was added to the URLRedirect list; this list is used to perform user redirection, as described in the next paragraph.
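A hedged sketch of such a generic lookup, assuming the hypothetical mapping layout shown above (the XML element and attribute names are assumptions, not the original tool’s code):

```csharp
using System.Linq;
using System.Xml.Linq;

public static class MappingReader
{
    // Given the exported page XML and the field-mapping XML, return the
    // value that should go into the SharePoint field 'spFieldName'.
    public static string GetMappedValue(XDocument export,
                                        XDocument fieldMappings,
                                        string spFieldName)
    {
        // Find which element of the export feeds this SharePoint field...
        var mapping = fieldMappings.Descendants("fieldMapping")
            .FirstOrDefault(m => (string)m.Attribute("field") == spFieldName);
        if (mapping == null) return null;

        // ...then read that element's value from the exported page XML.
        var source = export.Descendants((string)mapping.Attribute("source"))
            .FirstOrDefault();
        return source != null ? source.Value : null;
    }
}
```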

Search provider friendly URL redirection

Migrating the website to SharePoint 2013 meant that all existing URLs became invalid, since SharePoint stores pages and documents in a completely different directory structure. This would have been easy enough to fix in the import tooling for all internal links; in this case, however, we also had to consider the hundreds of thousands of external links to the website created by various sites across the internet. We had to make sure that all of these links would remain valid, and we wanted to perform the redirection to the new pages in such a way that search providers would maintain the page ranking.

To tackle this problem, we developed a simple yet very powerful solution. We created a SharePoint list containing two columns: the old URL and the new URL. As discussed earlier, each time a new page or document was created during the execution of the content creation tool, a new item was added to this list. After the content creation was completed, we had a full list of URLs to which users should be redirected when they visit an old URL.

To intercept a request before the user is served a 404 error, we had to create a custom HttpModule. The downside is that HttpModules execute on every request, even when the resulting page would not be a 404. To keep the HttpModule as lightweight as possible, we first checked whether the request would end in a 404 status; because we knew that all legacy URLs end in the .htm extension, that was our second check. Only if both checks passed did we query the URLRedirect list to see whether the user had requested an old URL.

The end result of the HttpModule is shown below. Please note that URLRedirectBE is a business entity class used for working with the URLRedirect list items in a strongly typed manner; it retrieves a SharePoint list item based on the old-URL column and maps the field values to public properties.
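A hedged sketch of what such a module can look like; URLRedirectBE is the business entity class from the post, but its constructor and property names here, as well as the event used, are assumptions:

```csharp
using System;
using System.Web;

public class UrlRedirectModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.EndRequest += OnEndRequest;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        var context = ((HttpApplication)sender).Context;

        // Cheap checks first: only act on requests that are about to
        // return a 404 and that target a legacy .htm URL.
        if (context.Response.StatusCode != 404) return;
        var url = context.Request.Url.AbsolutePath;
        if (!url.EndsWith(".htm", StringComparison.OrdinalIgnoreCase)) return;

        // Only now query the URLRedirect list for a matching old URL.
        var redirect = new URLRedirectBE(url);
        if (!string.IsNullOrEmpty(redirect.NewURL))
        {
            // A 301 keeps the search-provider ranking and tells crawlers
            // to update their index to the new URL.
            context.Response.RedirectPermanent(redirect.NewURL);
        }
    }

    public void Dispose() { }
}
```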

By redirecting with a 301 status, we not only made sure that any existing ranking with search providers remained in place, but also that search providers update their index, so that users arriving through the same search provider again are sent straight to the new URL and no longer pass through the HttpModule.

To register the new HttpModule in SharePoint, we created a web application scoped Feature with a feature event receiver, which is displayed below.
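A hedged sketch of such a receiver, using SPWebConfigModification to add the module to web.config on every server in the farm; the class, module and assembly names are assumptions:

```csharp
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class RedirectModuleFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var webApp = (SPWebApplication)properties.Feature.Parent;

        var mod = new SPWebConfigModification
        {
            Path = "configuration/system.webServer/modules",
            Name = "add[@name='UrlRedirectModule']",
            Sequence = 0,
            Owner = "UrlRedirectModule",
            Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
            Value = "<add name='UrlRedirectModule' " +
                    "type='MyCompany.Web.UrlRedirectModule, MyCompany.Web' />"
        };

        webApp.WebConfigModifications.Add(mod);
        webApp.Update();
        // Push the change to web.config on all servers in the farm.
        webApp.WebService.ApplyWebConfigModifications();
    }
}
```

A matching FeatureDeactivating override would remove the modification again by Owner.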

The result is a user friendly redirection mechanism that takes the existing pages search provider ranking into account.


Conclusion

Migrating a website into a SharePoint 2013 environment is never an easy task. In this article we discussed our dynamic approach using custom tooling and configuration XML files. The key point for every migration is to focus on what is important for the customer, and to make sure that is translated into the best possible migration approach. For us, the generation of metadata throughout the content creation process was of vital importance. It ultimately allowed us to create a new website that completely changed the visitor’s perspective: from a website based on a hierarchical structure to a search-driven website built on metadata.

This post is also published as an article in #13 of the DIWUG magazine. Be sure to visit the DIWUG site for more interesting articles.

I did not do all the work myself, so I would like to thank the entire team. Be sure to check out the blogs from some of my team members as well:
Garima Agrawal –
Sachin Sade –