Store encrypted AppSettings and ConnectionStrings in a database

Do you have connectionStrings and appSettings with potentially sensitive data spread all over your network in various web.configs?

Do you worry about your database user IDs and passwords saved in source control?

I set out to solve this problem and created DBConfigurationManager. It is available as a NuGet package:

DBConfigurationManager allows you to store your appSettings and ConnectionStrings in a database table. There is nothing you need to do in the code. You continue using

ConfigurationManager.AppSettings[""] and ConfigurationManager.ConnectionStrings[""]

When you install the package, it gives you the TABLE script you need to hold the configuration information, and it adds a connectionString to your web.config pointing to the configuration datastore.


It is a great way to centralize your appSettings and ConnectionStrings. If you are worried about security, you can easily use SSPI and connect to the configuration database using the AppPoolIdentity or service account you are running your website under.
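For example, the connection string pointing at the configuration database can use integrated security so that no password ever appears in web.config. This is an illustrative sketch; the connection-string name, server and database names below are made up, not the package's actual defaults:

```xml
<connectionStrings>
  <!-- Integrated Security=SSPI authenticates as the AppPool identity
       or service account the website runs under; no password stored -->
  <add name="DBConfigConnection"
       connectionString="Data Source=ConfigDbServer;Initial Catalog=AppConfig;Integrated Security=SSPI"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```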

New in this version

  • Ability to encrypt your settings in the database.
  • You can encrypt using either the MachineKey or a secret key of your own (MD5-based).
  • Also included is a tool (look in your bin folder) called StringEncryptor to create your encrypted settings.

TOOL to encrypt your Settings



Indexed Mind – Search your Company’s Brainpower!

How do you track down experts in your organization? Indexedmind helps you find who knows what in your organization at the click of a button. No more relying on stale documentation and wikis. We engage everyone in your organization to collectively build your company's knowledge base.

Think of it as LinkedIn + Quora for your Company!

We are in Private Beta! Check us out and request a FREE Beta Invite


How to bust/clear DonutCache in MVC

Let us say you have an MVC controller/action which is donut-cached like below, and you want to bust the cache for some reason. An example: you cache a partial view for 24 hours, but give the user a refresh button to allow him to manually refresh it if he wishes to.

NOTE: you cannot use web.config-based cache profiles with MVC's built-in OutputCache attribute. It just doesn't work, so DonutCache is being used for that.

[DonutOutputCache(CacheProfile = "CachedAction")]
public ActionResult CachedAction(string id)
{
    return View();
}

Here is the web.config:

<caching>
  <outputCache enableOutputCache="true" />
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="CachedAction" duration="14100" varyByParam="*" location="Any" />
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>

Normally, if you just call $.ajax() from your JavaScript and request that action, it will just come back with the cached copy. The trick here is to first bust your server cache. So, you can create another action like so:

public void BustCache(string id)
{
    var ocm = new OutputCacheManager();
    var rv = new RouteValueDictionary();

    if (!string.IsNullOrEmpty(id))
        rv.Add("id", id);

    ocm.RemoveItems("controller", "cachedaction", rv);
}

Finally, you simply make two ajax calls: first you bust your cache, and then you call your regular MVC action to get your content. Chain them so the content request only fires after the cache has been cleared.

$.get('@Url.Action("bustcache", "controller")').done(function () {
    $.get('@Url.Action("cachedaction", "controller")');
});
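Because $.get is asynchronous, firing the two requests back-to-back can race: the content request may reach the server before the cache has been busted. The bust-then-fetch ordering can be captured as a small helper; this is a sketch where getFn stands in for $.get or any promise-returning HTTP helper, and the URLs are illustrative:

```javascript
// Only request the cached action once the cache-bust call completes.
function refreshCachedAction(getFn) {
    return getFn('/controller/bustcache/42').then(function () {
        return getFn('/controller/cachedaction/42');
    });
}
```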

IIS Rewrite Rules (force www to non-www and http to https)

  • Redirect the www site to non-www. e.g. I use the rule below for my own website: if a user browses to the www host, they are redirected to the bare indexedmind.com domain.
    <rule name="Redirect WWW to non-WWW" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^indexedmind\.com$" negate="true" />
      </conditions>
      <action type="Redirect" url="http://indexedmind.com/{R:1}" />
    </rule>
  • Redirect http requests to https. e.g. if a user browses over plain http, force them onto https.
    <rule name="Redirect to HTTPS" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="^OFF$" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="SeeOther" />
    </rule>
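Both rules live inside the URL Rewrite section of web.config, under system.webServer, and are evaluated top to bottom:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- the "Redirect WWW to non-WWW" and "Redirect to HTTPS"
           rules shown above go here, in that order -->
    </rules>
  </rewrite>
</system.webServer>
```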

Testing your jquery mobile website in Chrome (Ripple mobile emulator)

If you are developing a jQuery Mobile website, Chrome or Firefox serve as great tools to debug your code and let you play with the DOM and JavaScript on the fly. However, you can never get a real feel for how the site will look and run on a mobile device.

If you are on a Mac, you can obviously use Xcode's iOS Simulator, which is simply awesome. On Windows you can download the Visual Studio Express edition for mobile development, which ships with a Windows Phone emulator.

But there is another solution which is the best of both worlds. Enter Ripple (download link), a Chrome extension which nicely emulates a mobile device. I am in love with it because not only does it let me see how my website will look on a mobile device, but it also gives me the awesome development power provided by Chrome and its debugging tools. In summary, here are the benefits of Ripple:

  • View your site on a variety of mobile devices (Android, iOS, Windows Phone, Palm, etc.)
  • Use Chrome's uber-powerful tools to do your development
  • Fake GPS/location services. Ripple even lets you simulate driving around!
  • Test your site in landscape/portrait orientation
  • Simulate accelerometer events.

My personal thanks to the developers of Ripple !



OSX Lion and Windows 7 on the same box (Vmware, Parallels and Bootcamp compared)

I only have Macs in my house. I just kept throwing out the Windows machines as they started to die off, without replacing them (yes, all the HPs and Dells are in the trash now). Although I do a lot of iOS and Ruby on Rails programming, which is a perfect fit for OSX Lion, I only do that as a hobby. At the end of the day I have to remember that my 16 years of expertise as a Microsoft developer is what pays the bills :).

I have tried all three options for running Windows on OSX, i.e. Parallels Desktop, VMware and Bootcamp.

Parallels and VMware seem like the ideal solution at first because you don't have to reboot your laptop into Windows. You click the VMware or Parallels icon and it fires up the Windows machine in a virtual environment. Sounds great, right? In my experience, it sounds great only in theory. Both solutions do work without any problems, but performance is a different ballgame altogether. Running Visual Studio 2010 in the VMs is pure torture. It is so slow that it is almost unusable. (I have a MacBook Pro, OSX Lion, Core i7 2GHz with 4 GB RAM, and I allocated 1.5 – 2 GB RAM to the VM.)

So, I ended up just firing up Bootcamp and tried to install Windows on the second partition. It did have its challenges, but nothing that I couldn't overcome. Here is what I ran into:

  1. Low disk space – OSX reported I had 350GB free, but Bootcamp kept complaining that I didn't have sufficient space. Weird, right? Well, no matter how much Apple makes you believe that OSX doesn't get fragmented, it actually does. The easiest way to defragment is to back up your machine using Time Machine, format it completely (i.e. boot with the OSX Lion DVD and use Disk Utility to format your partitions) and restore from the Time Machine backup. This is the easiest and free way to defragment your HDD. Once you do this, Bootcamp will be a happy camper and will let you proceed.
  2. Made the mistake of allocating too little space to Win 7 – I allocated 50 GB to Windows 7 and thought I could install Visual Studio 2010 on it without any issues, and it did work great! But greedy developer that I am, I wanted to install Visual Studio 2012 RC on it too. And that is where I ran out of space. The solution: reboot into OSX and use Disk Utility to reduce the OSX partition size. Then reboot back into Windows 7 and download a free tool called Minitool Partition Wizard. Fire up the tool, click the Bootcamp partition and then choose extend/merge. This will let you extend the partition and use the space you freed up with Disk Utility. (Disk Management built into Win 7 will not work.)

After all was done, I am loving Bootcamp/Windows 7. It is blazing fast, for obvious reasons: a VM solution just cannot compare to an OS running on real hardware. I know you are still thinking "But I would hate to reboot my computer every time into Windows!!!!!". Well, if you fire up the VMware or Parallels VM from a suspended state, you will end up spending about the same amount of time you would physically booting into Windows 7. And when you want to get back to OSX, Lion will be waiting for you with its "remember open programs" feature that I have come to love.

Backing up and fail-safing your ADAM / LDS instance

This could potentially be a very long post, but I am going to stick to the high-level objectives only. Leave a comment if you want more details and I will reply to you.

OBJECTIVE: Your users are stored in an ADAM database, and your website sits on top of it, using the ASP.Net membership framework to interface with ADAM and authenticate users. You want to make sure you are covered in disaster scenarios (like disk corruption, the ADAM server blowing up, or unintended manual corruption by your system admins).

PROBLEM: If you look carefully, we are talking about two different things here.

  • Hardware failures – i.e. Poof!!!! and your ADAM instance just disappears. Panic, your website is down!!!
  • Manual data corruption – your sysadmin does something foolish, say he updates all users with the same last name using a vbs script or something. This is more insidious because your website is not down and ADAM is not down, but your user data is corrupt.

SOLUTION: Although we have two distinct ways of getting into trouble, the end result is the same and so are the solutions. But first, let us talk about the minimum you need in place to recover from a failure scenario.

ADAM Replication – Fortunately for us, ADAM (or LDS) comes out of the box with support for replication. What this means is that once your main ADAM instance is up and running, you can install multiple ADAM instances on other servers as "replicated instances", and all these servers magically know how to talk to each other and keep their data in sync.

Plus, it gives you the flexibility of turning on "two-way" replication, i.e. you change data on the replicated instance and the main ADAM server reflects these changes. You also have the option of staggered replication, i.e. the replicated instances receive deltas from the main server only after XX minutes or hours. Any light bulbs yet on how you will use this to recover from bad things happening?

Windows Backups – I know, I know. Nobody uses Windows Backup and Restore. But this is the perfect place to use it. You simply set up a backup job which backs up your ADAM directory to a file server. We have done this where I work, and it gives us nightly backups going back 60 days. Also, ADAM keeps a lock on its files on disk, but Windows Backup uses Volume Shadow Copy, which takes care of backing up files even while they are locked by a process. We use the append option, so backups are not overwritten every night but appended (keep an eye on that backup file though... it can grow pretty fast!!!).

So, now that we know the proper way to protect ourselves from bad things, here is how you apply it to various situations:

  1. Hardware failures – If your main ADAM server blows up, you can simply point your website to a replicated instance, since it has the latest and greatest data.
  2. Data corruption by sysadmin – If your sysadmin writes a script which updates everyone's SSN to 000-00-0000, for example, you can either restore the ADAM data from last night's backup (believe me, it is amazingly simple to overwrite ADAM data from a backup and get up and running in no time), or, if you have staggered replication set up (i.e. replicated instances receive change deltas only after 1 hour) and your sysadmin reports the data corruption to you in time, you can shut down the main ADAM instance and point your website to the replicated instances, because they still have good data.

Bottom line: you can use the replicated instances to recover from a failure instantly while you are busy rebuilding the main instance from backups. If you have two-way replication set up, then when you bring the main instance back online, the replicated instances will send their deltas back to it. Say, for example, 100 users signed up before you could restore the main instance from backups; those 100 users exist only in the replicated instance. The moment you bring the main ADAM instance back up, replication sends these users to the main instance and you are back in sync.