How to use Ctrl+W to close a tab in Visual Studio

Every browser I know lets you close tabs with Ctrl+W in addition to the standard Windows Ctrl+F4. I find Ctrl+W easier to reach than Ctrl+F4, but as a developer I switch back and forth between Visual Studio (or VS Code) and my browsers, and sometimes I press Ctrl+W in Visual Studio by mistake. That’s why I looked around in Tools > Options to see whether there is a way to change this shortcut in Visual Studio once and for all. It was right there waiting for me.

Here is the way to do it:

  1. Select “Tools > Options” from the menu.
  2. In the “Options” window, look for “Keyboard” under “Environment”.
  3. In the “Show commands containing:” text box, type “Window.CloseDocumentWindow”.
  4. Unsurprisingly, the command is assigned “Ctrl+F4” by default. Remove it by clicking the “Remove” button next to “Shortcuts for selected command:”. You might need to click it a few times if more than one shortcut is assigned.
  5. Make sure “Global” is selected for “Use new shortcut in:”.
  6. Click inside the “Press shortcut keys:” box, press “Ctrl+W”, and click “Assign”.
  7. Don’t forget to click “OK” at the end to save your changes.

At the end, you should have something like the following picture.

Visual Studio – Options – Keyboard: Window.CloseDocumentWindow set to Ctrl+W (Global)

How to use PowerShell Invoke-WebRequest behind corporate proxy

Corporate proxies are one of the productivity killers for developers. They are not well supported in every utility and framework, and each tool has its own literature on setting proxy settings. To add insult to injury, not every tool supports NTLM authentication well, even though it is quite common in proxies. Companies sometimes also have to make exception rules in their proxy settings, which can complicate matters further.

In the case of PowerShell, you do not have to worry much. Let’s see how you can set a proxy for Invoke-WebRequest, for example; other commands usually support proxy settings similarly.

$pass = ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential -ArgumentList "contoso\george", $pass
Invoke-WebRequest -Proxy "" -ProxyCredential $cred

In line 1, we store the password in a SecureString object. In line 2, we create a new PSCredential object from the username and password, and finally, in line 3, we call Invoke-WebRequest with the -Proxy and -ProxyCredential parameters.

Let me give you another alternative. Did you know you can also ask your OS for the proxy settings required for a URL, and even use the current user’s credentials?

$dest = ""
$proxy = ([System.Net.WebRequest]::GetSystemWebProxy()).GetProxy($dest)
Invoke-WebRequest -Uri $dest -Proxy $proxy -ProxyUseDefaultCredentials

Custom Component vs Script Component

Why this comparison?

Recently, while working on a (relatively) huge cloud migration project, I had the opportunity to brush up on my SSIS knowledge, which was getting ancient. I had to work on several SSIS projects and pave the way for my fellow developers so we could all move faster.

When working on SSIS projects, there is often a need to write logic that cannot be achieved with the out-of-the-box components from the toolbox. In those cases, the first option that comes to mind is the Script Component. This component lets you write code in your language of choice to act as a source, destination or transformation. You also always have the option of building your own shiny custom component. It is generally believed that making a custom component is more complicated and takes more time, so most of the time developers use Script Components throughout the project. That leads to duplicated code that is not maintainable, where every change is hard to track, to the point that in larger projects no one is really sure how each part works anymore 🙂

What’s wrong with the Script Component?

The Script Component looks pretty good in theory. It gives you the possibility to embed code in a data flow, and it can act as a source, destination or transformation. The problem is the way the code is embedded in the package. If you open the XML content of a package that has a Script Component in it, you will notice that it is actually a full Visual Studio project, with a randomly generated namespace and some generated code that makes it easy to interact with fields through typed properties. When you double-click a Script Component, SSDT decodes that project, (presumably) writes all of its files to a temporary folder and opens it in a new instance of Visual Studio. When you close that project after applying your changes, it has to do the reverse. If you put a breakpoint somewhere in that project, it has to do all of that yet again. So, to sum it up:

  • Script Components take up a lot of space in SSIS packages. If you create several Script Components in a package, they can dramatically increase its size, slowing down development, hurting the stability of Visual Studio and hindering productivity.
  • Each Script Component is an independent project, which makes debugging harder, especially since they are encoded and embedded within the package’s XML.
  • The fact that all Script Components are encoded and embedded within the XML file makes it really hard to do SSIS development in a team, and it is very difficult to trace a change no matter which source control software you use. Imagine comparing thousands of lines of encoded XML to see where some logic might have changed in a few lines of script.
  • The Visual Studio debugger might not work properly (it can even crash) when you want to debug multiple Script Components. It is also not possible to have breakpoints in more than one Script Component.
  • Readability is another big issue. You cannot easily skim through your Script Components to understand the logic. You have to open each one, slowly, one by one, and if you miss something, you have to go back to a previous one. It is not like a typical Visual Studio project, where you can easily navigate through the code with several out-of-the-box features.
  • The names of the columns you get inside a Script Component might not be exactly the same as the original columns, if you decide to use the generated typed properties.
  • It is not possible to reuse the logic inside a Script Component unless you externalize it in another assembly, and in that case managing versions of that external assembly can become tedious: you might need to open every single Script Component that references the assembly should its version change. If you make a breaking change by mistake, you will not find out at compile time; you will need to test every single Script Component that uses the assembly, and there is no easy way to find out which scripts reference it. In short, you need a full end-to-end test to find an issue that other types of projects would catch at compile time.
  • It is not easy to duplicate a Script Component. I know duplicating code is against the DRY rule, but let’s say that, due to the above limitations, in a special scenario you decide to do so. Because the assembly names of those Script Components will be the same, at compile time one of them will replace the other without SSDT giving you an error or even a warning.
  • This one is actually a bug: if you use more recent versions of the C# language (I don’t know about VB, but it is probably the same), none of your breakpoints will ever be hit. So it is better to forget about those shiny features that came with C# 6.0+, like the simplified dictionary initializer (i.e. { ["key"] = value }).

What can justify using a Script Component?

In my opinion, with all the above problems around Script Components, the only time you should use one is when you need a one-off component in your data flow, or when you are going to have just a few minimal Script Components in your package. In all other cases I have experienced, it is better to create a custom component. You might be thinking that building a custom component is difficult, or you might have already tried to find your way around MSDN to figure out how to do it, but got confused by the minimal explanations and oversimplified examples that exist. Trust me on this: it is not that difficult if you do it right. In my next post, I’m going to show you just that, with a clear step-by-step guide and a real-world example.


Setting development machine behind corporate proxy + authentication

Developing in companies that put developers behind proxy servers can be frustrating in an age when every tool needs access to online resources and even parts of the software development life cycle are cloud based. Proxy servers that require NTLM authentication just add to that frustration. NTLM was developed by Microsoft, yet many applications built by Microsoft do not support it, require some configuration, or in the worst cases need some hacking to make them work. Below is a list of some of the tools developers might be using on a daily basis and what you need to do to make them connect via an NTLM proxy. I will keep adding to the list as I encounter more.

  • Visual Studio Code (VSCode)
  • NPM
  • Visual Studio, Web Platform Installer and other .NET Applications
  • Postman

Visual Studio Code (VSCode)

VSCode 1.15 and up now supports NTLM proxies (finally, Microsoft supports its own authentication protocol).


NPM

For NPM you have two options: either send the proxy address and credentials in every single command you run, or set them in NPM’s global configuration. I recommend the former because it is more secure.

Set proxy in every command

When calling an NPM command, you can always use the --proxy switch to set the proxy for that command. The syntax of the switch is the following.

--proxy username:password@proxyaddress:port

For example, to use myproxy:8080 as the proxy address, my-domain\reza as the username and P@ssw0rd as the password when calling the install command, you can type the following.

npm install --proxy http://my-domain%5Creza:P%40ssw0rd@myproxy:8080

Please note that both username and password are URL encoded. You can use the following command in your browser’s developer tools to encode them.
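For instance, `encodeURIComponent` (available in any browser console, and in Node) produces exactly the encoded values used above:

```javascript
// encodeURIComponent percent-encodes characters that are not safe in a
// URL component, including "\" (%5C) and "@" (%40).
console.log(encodeURIComponent("my-domain\\reza")); // my-domain%5Creza
console.log(encodeURIComponent("P@ssw0rd"));        // P%40ssw0rd
```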


Set proxy in NPM configuration

To set the proxy in NPM’s global configuration, use the same format as above for the proxy address, username and password, and use npm config set to store it in the configuration. For example, to set the proxy address to myproxy:8080, the username to my-domain\reza and the password to P@ssw0rd, you can use the following command.

npm config set proxy http://my-domain%5Creza:P%40ssw0rd@myproxy:8080

Visual Studio, Web Platform Installer and other .NET applications

To set the proxy for pretty much any .NET application, you need to put the following in the configuration file of that application.

    <system.net>
        <defaultProxy useDefaultCredentials="true" enabled="true">
            <proxy bypassonlocal="true" proxyaddress="http://myproxy:8080" />
        </defaultProxy>
    </system.net>

For Visual Studio, I suggest also enabling IPv6 if the above configuration does not work, as suggested by some other developers.

    <system.net>
        <settings>
            <ipv6 enabled="true"/>
        </settings>
        <defaultProxy useDefaultCredentials="true" enabled="true">
            <proxy bypassonlocal="true" proxyaddress="http://myproxy:8080" />
        </defaultProxy>
    </system.net>

For executable files, the configuration file is named after the executable but with a .exe.config extension. For Visual Studio it is called devenv.exe.config, and for Web Platform Installer it is WebPlatformInstaller.exe.config.


Postman

At the moment, Postman (v7.0.4) does not support NTLM authentication, and the best way to make it work is through Fiddler. Here is what you need to do, step by step:

  1. Install Fiddler and run it.
  2. Make sure Rules > Automatically Authenticate is selected. This enables Fiddler to authenticate on behalf of Postman with your current user account.
  3. In Postman, go to File > Settings, then Proxy, and turn on Global Proxy Configuration. *
  4. For Proxy Type, select both HTTP and HTTPS.
  5. For the Proxy Server, use Fiddler’s address, which is 127.0.0.1:8888 by default (if you have changed the port Fiddler listens on, you will need to change it here as well).
  6. In Fiddler, go to Tools > Options… and, in the HTTPS tab, select Capture HTTPS CONNECTs and Decrypt HTTPS traffic, then install the certificate when prompted. **
  7. In Postman, go to File > Settings > General and turn off SSL certificate verification. You need to do this because, currently, Postman does not support intermediate proxies.

* In step 3, instead of using Global Proxy Configuration you may also use Use System Proxy, but in that case you need to make sure Capture Traffic is selected under Fiddler’s File menu. This way, Fiddler captures all HTTP traffic by setting the Windows proxy settings.

** You won’t need steps 6 and 7 if you won’t be working with HTTPS URLs.


Developing NodeJS apps using AngularJS 4 in macOS

Well, that is one long title for a blog post! But bear with me; I’m going straight to the point. Recently I’ve been wanting to dip my toes into the AngularJS 4 world, and to make it more exciting I thought I would do it on a Mac using NodeJS and VSCode. Now that VSCode is getting more and more popular, sharing my experience here might actually help someone, and a future me, have an easier ride than mine today.

The ingredients

  • A computer with macOS on it. Any version would do.
  • Homebrew
  • VSCode
  • Git client for Mac
  • NodeJS
  • Half an hour of time!


We first need to get the environment ready before we start to develop anything, and believe it or not, there are several ways to do that. After studying and trying different approaches, this is the best way I have found: the safest, fastest, easiest, most repeatable and most undoable, and yet in line with Apple’s best practices in the Mac world!

  1. Install Homebrew if you don’t already have it.
  2. Install NPM
  3. Install VSCode or any other code editor / IDE
  4. Install Git client
  5. Install NodeJS
  6. Make a folder
  7. Start VSCode

1. Install Homebrew

Although the NodeJS website provides an installer, I suggest using Homebrew to install NPM and any other open source packages you might need on your Mac. The NodeJS installer needs admin rights and puts files in folders that only admin users can access. You don’t want a third-party application tinkering with those areas, do you?! Plus, it will be easier to clean up afterwards, should you need to remove NPM for some reason in the future.

I recommend visiting Homebrew’s official website to learn more about it. At the time of writing this blog post, you just need to open Terminal and run the following command to execute Homebrew’s installation script. The script explains what it does and then pauses for your confirmation.

ruby -e "$(curl -fsSL"

2. Install NPM

Now that you have installed Homebrew, installing NPM is as simple as typing the following command in the Terminal window. It will install the latest stable version.

brew install node

After the installation is done, you can type the following command to see which version is installed.

node -v

If you need to update Node and NPM in future you can use brew upgrade node and to uninstall them you can use brew uninstall node.

3. Install Visual Studio Code

This step is pretty easy. You just need to download Visual Studio Code, Microsoft’s lightweight code editor, from its official website and install it like any other application. Here is the official download page:


There are two ways to start writing a new AngularJS application. You can set up the project manually, creating the required boilerplate files and writing the configuration yourself, which can really help you understand what is going on behind the scenes; or you can use the Angular CLI to do it for you and save some time. Instead of starting with the manual method, I’m going to show you how to use the Angular CLI, because that’s the way you will most probably do it in the future. Then I will explain all the files and configurations in such a way that you could even do it manually if you wished to.

[To be continued!]




How to mimic Pivot Table or Categories in Numbers 3+

Perhaps you are already aware that Apple has cut many features from the latest versions of Numbers and its other MS Office-like products. One of the most popular features removed from Numbers is called Categories. By activating Categories on a table, you could aggregate and summarize values to see an overview of what matters to you. Recently I was working on a list of values, and since the list was not bigger than what Numbers can handle, and I didn’t need the speed and power of Microsoft Excel, I thought I would give Numbers a try. I still like the fact that Numbers gives you a free canvas that allows you to put many tables, independent of each other, on one page. I think it is the only advantage Numbers has over its alternatives. Otherwise, for just a few dollars per month you could have access to the latest version of Microsoft Office regardless of your OS, or even directly in your browser; plus, Microsoft gives you a terabyte of online storage! OK, let’s get back to work before I change your mind 😉

Let’s start with an example like the following.

1 Category Amount
2 Blue 11.20
3 Red 15.89
4 Red 10.30
5 Orange 32.12
6 Green 15.39
7 Blue 10.18
8 Green 24.76
9 Green 89.31
10 Orange 8.75
11 Blue 15.28

The first column just contains the row number. The second indicates the category of each row; this could be anything, like the category of expenses in an expense report, and it is the column we are going to aggregate on. The third column contains the values; it could be the amount of each expense, for example.

Now, let’s say you want to aggregate it into the following table, but you want Numbers to do it for you, so that each time a row is added to the table you don’t need to recalculate everything manually.

Category Sum
Blue 36.66
Red 26.19
Orange 40.87
Green 129.46

Since I want to do the aggregation in the same table, I need a way to detect distinct categories. Then, for each category, I need to calculate and display its sum next to it. If I display the sum only at the first occurrence of each category and leave the other occurrences blank, the problem is solved. Let’s split the problem into smaller pieces, as usual. The first question: how do we find distinct categories?

Step 1 – Detecting distinct categories

Let’s add a new column called “Test”, then put the following formula in the first cell of the column below the header, i.e. cell C2: =COUNTIF(A$1:A1,A2). Now grab the bottom of the cell’s rectangle and drag it to the bottom of the table; this fills the remaining cells of the column. The result is the following table.

1 Category Amount Test
2 Blue 11.20 0
3 Red 15.89 0
4 Red 10.30 1
5 Orange 32.12 0
6 Green 15.39 0
7 Blue 10.18 1
8 Green 24.76 1
9 Green 89.31 2
10 Orange 8.75 1
11 Blue 15.28 2

As you can see, we get zero only for the first occurrence of each category, which means we can detect first occurrences with an IF. Now, let’s put the following formula in cell C2 instead: =IF(COUNTIF(A$1:A1,A2)=0,"*","") and, as before, drag the corner of the cell to the bottom of the table, which gives us the following result.

1 Category Amount Test
2 Blue 11.20 *
3 Red 15.89 *
4 Red 10.30
5 Orange 32.12 *
6 Green 15.39 *
7 Blue 10.18
8 Green 24.76
9 Green 89.31
10 Orange 8.75
11 Blue 15.28

Now, for the first occurrence of each category, we display “*” in column C. This means we can detect the first occurrence of each category and display arbitrary text in it. What about displaying the SUM of each category there instead? That would be the final solution.

My final solution

In this step, we replace the “*” in the formula with SUMIF(Category,A2,Amount), so that the sum of each category is calculated at the first occurrence of that category. All you have to do is put the following formula in C2, then drag the bottom-right corner of the cell to the bottom of the column: =IF(COUNTIF(A$1:A1,A2)=0,SUMIF(Category,A2,Amount),"")
The resulting table will be similar to the following. I just renamed the Test column to “Total per category”. You can use formatting to distinguish the aggregate rows, or even hide the other rows if you like.

1 Category Amount Total per category
2 Blue 11.20 36.66
3 Red 15.89 26.19
4 Red 10.30
5 Orange 32.12 40.87
6 Green 15.39 129.46
7 Blue 10.18
8 Green 24.76
9 Green 89.31
10 Orange 8.75
11 Blue 15.28
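Outside of Numbers, the same first-occurrence-plus-total logic is easy to sketch in code. Here is a small JavaScript version run over the sample data above (purely illustrative):

```javascript
// Mimic the two Numbers formulas:
//   COUNTIF(A$1:A1, A2)         -> occurrences of this category in earlier rows
//   SUMIF(Category, A2, Amount) -> total amount for this category
const rows = [
  ["Blue", 11.20], ["Red", 15.89], ["Red", 10.30], ["Orange", 32.12],
  ["Green", 15.39], ["Blue", 10.18], ["Green", 24.76], ["Green", 89.31],
  ["Orange", 8.75], ["Blue", 15.28],
];

const totalPerCategory = rows.map(([cat], i) => {
  // COUNTIF over the rows above the current one: blank for later occurrences
  if (rows.slice(0, i).some(([c]) => c === cat)) return "";
  // SUMIF over the whole column: total for this category
  const sum = rows
    .filter(([c]) => c === cat)
    .reduce((acc, [, amount]) => acc + amount, 0);
  return sum.toFixed(2);
});

console.log(totalPerCategory);
// [ '36.66', '26.19', '', '40.87', '129.46', '', '', '', '', '' ]
```

The output matches the “Total per category” column in the table above.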

How to warm-up SharePoint or other web applications and WCF (SOAP) services with PowerShell

There are many reasons you might want to warm up a web application occasionally: after a fresh deployment, or on a regular basis after recycling application pools. Sometimes you might also need to warm up SOAP services without going through the front-end.

It might seem an easy task, especially if you have PowerShell 3.0 or higher on your servers, but after Googling a while and reviewing some of the top hits, I discovered that each solution was missing a part. Some only work in a single-server scenario, some forget that each HTTP response might contain links to scripts and images that we need to download, and I could not find anything for SOAP services that just works. Long story short, I decided to put together a simple script that just works and is easy to adapt to everyone’s needs.

Please note that my only assumption is that you have PowerShell 3.0+ on your servers.

Currently the script takes care of the following tasks, but I will most likely improve it to cover other useful scenarios.

  • Calling SOAP operations and sending parameters and custom headers
  • Calling front-end URIs and downloading scripts and images that are local to the front-end
  • Logging to a configurable folder and file name
  • Cleaning up old log files
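The script itself is PowerShell, but the second bullet, warming up a page plus the local resources it links to, can be sketched like this in Node (18+, for the global `fetch`). The function names, the regex and the filtering rules here are simplified illustrations, not the script’s real code:

```javascript
// Warm-up sketch: request a page, then request the scripts/images it
// references, keeping only the ones local to the front-end.
function extractLocalResources(html, origin) {
  const urls = [];
  const re = /(?:src|href)\s*=\s*["']([^"']+\.(?:js|css|png|gif|jpg))["']/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    const url = m[1];
    if (url.startsWith("/")) urls.push(origin + url); // relative to site root
    else if (url.startsWith(origin)) urls.push(url);  // absolute, same host
    // anything else (CDNs, other hosts) is skipped
  }
  return urls;
}

async function warmUp(pageUrl) {
  const res = await fetch(pageUrl);
  const html = await res.text();
  const origin = new URL(pageUrl).origin;
  await Promise.all(extractLocalResources(html, origin).map((u) => fetch(u)));
}
```

A production script would also add logging, retries and log cleanup, which is what the PowerShell version takes care of.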

Currently, I have the following points in mind to improve the script.

  • Put configuration in a different file.
  • Improve function definitions (named and typed parameters and description).
  • Default values for parameters when it makes sense (e.g. log folder can be the current folder).
  • Support REST services.

I’m open to any suggestions and feature requests. Please let me know if you found it useful or if I have got something wrong.
