
How to edit an incorrect commit message in Git

Note: VSTS = Visual Studio Team Services.

Sometimes in Git we end up writing an incorrect commit message by mistake (especially when working on multiple projects). Now, I know there are lots of sources online that walk through how to fix incorrect commit messages, but having personally experienced this, it can take time to search around and piece all the relevant steps together. So, I decided to create my own walkthrough guide on how I managed to correct an incorrect Git commit message. Hoping this comes in use for anyone who may stumble across such a situation:

  1. First check out a temp branch:
    • git checkout -b temp
  2. On the temp branch, reset --hard to the commit whose message you want to change. (To check the commit number, log in to either VSTS or GitHub and track commits under the code branch.) Taking the commit number to be 946992, for example:
    • git reset --hard 946992
    • You should now get a message similar to: HEAD is now at 946992 <old commit message>
  3. Use amend to change the message:
    • git commit --amend -m "<new_message>"
  4. This step is optional. If the commit being edited is the latest commit then it can be skipped. However, if other commits were made after it, then cherry-pick all the subsequent commits after 946992 from master onto temp, in order, and commit them. Use amend if you want to change their messages as well:
    • git cherry-pick 9143a9
    • git commit --amend -m "<new message>"
    • git cherry-pick <last commit number>
    • git commit --amend -m "<new message>"
  5. Now, force-push the temp branch to the remote:
    • git push --force origin temp:master
    • At this point, if you get the following error: ! [remote rejected] temp -> master ( … You need the Git ‘Force Push’ … ), it means that either VSTS or GitHub has locked Force Push permissions for you. If this is the case, you need to go into VSTS or GitHub and assign yourself the “Force push (rewrite history, delete branches and tags)” permission. You may have to ask the repository owner to assign the rights to you, then try again.
  6. Once done, make sure you are still on the temp branch:
    • git checkout temp
  7. Delete the master branch locally (make sure it is a capital D):
    • git branch -D master
  8. Then run: git fetch origin master
  9. Finally: git checkout master

This will move you back into your master branch locally and you should be able to commence as normal from here.
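Steps 1–4 above can be sketched end-to-end in a throwaway repository. This is a demo, not the exact remote setup: the commit hashes are generated on the fly, and steps 5–9 (force push, recreating local master) need a real remote so they are only noted in comments.

```shell
set -e
# Throwaway demo repo: two commits, then rewrite the first commit's
# message and replay the second on top of it (steps 1-4 above).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > f.txt && git add f.txt
git commit -qm "old commit m"           # the commit with the bad message
echo two >> f.txt
git commit -qam "second commit"         # a later commit on top of it
bad=$(git rev-parse HEAD~1)
later=$(git rev-parse HEAD)
git checkout -qb temp                   # step 1: temp branch
git reset -q --hard "$bad"              # step 2: back to the bad commit
git commit --amend -qm "new message"    # step 3: rewrite its message
git cherry-pick "$later" >/dev/null     # step 4: replay the later commit
# steps 5-9 (git push --force origin temp:master, then rebuilding the
# local master branch) require a real remote, so they are not demoed here.
git log --format=%s
```

Cherry-pick preserves the later commit's message, so the log now shows the corrected history with only the bad message rewritten.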



Microsoft have now released v2 of Data Factory. Though it is still in preview, it has the handy ‘Author and Deploy’ tool; this includes the Copy Activity wizard to assist in creating a copy data pipeline. Much of this is the same as v1, however changes have been introduced in this second iteration; I have had the fortune of working with these changes and this blog is about exactly that. I will highlight the differences that Azure Data Factory v2 has brought in (as of the time of writing), so I wouldn’t be wrong in saying that further changes and differences are most likely on their way too. I am assuming here that anyone reading this blog has prior experience of using Data Factory. The following are the differences:

  1. Partitioning via a pipeline parameter – In v1, you could use the partitioning property and SliceStart variable to achieve partitioning. In v2 however, the way to achieve this behaviour is to do the following actions (This applies both when using the Copy Wizard and an ARM Template for the pipeline):
    1. Define a pipeline parameter of type string.
    2. Set folderPath in the dataset definition to the value of the pipeline parameter.
    3. Pass a hardcoded value for the parameter before running the pipeline, or pass a trigger start time or scheduled time dynamically at runtime.
    4. Here is an example of the above from an Azure Resource Manager Template:
      "typeProperties": {
          "format": {
              "type": "ParquetFormat"
          },
          "folderPath": {
              "value": "@concat('/test/', formatDateTime(adddays(pipeline().TriggerTime,0), 'yyyy'), '/', formatDateTime(adddays(pipeline().TriggerTime,0), 'MM'), '/', formatDateTime(adddays(pipeline().TriggerTime,0), 'dd'))",
              "type": "Expression"
          },
          "partitionedBy": [
              { "name": "Year",  "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
              { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
              { "name": "Day",   "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } }
          ]
      }
  2. Custom Activity – In v1, to define a custom activity you had to create a .NET Class Library project with a class implementing the Execute method of the IDotNetActivity interface. In Azure Data Factory v2, a Custom Activity no longer requires you to implement a .NET interface. You can now directly run commands, scripts and your own custom code, compiled as an executable. To configure this implementation, you specify the command property together with the folderPath property. The Custom Activity will upload the executable and its dependencies to folderPath and execute the command for you. Linked services, datasets and extended properties defined in the JSON payload of a Data Factory v2 Custom Activity can be accessed by your executable as JSON files. Required properties can be accessed using a JSON serialiser. To create an executable for a Custom Activity you need to:
    1. Create a New Project in Visual Studio
    2. Select Windows Desktop Application -> Console Application (.NET Framework). Be sure to target the .NET Framework and not .NET Core, otherwise a .exe will NOT be created at build time.
    3. Add in code files as needed including JSON files i.e. Linked Services etc.
    4. Once done, build the project and then find the executable under the project folder at \bin\<Debug or Release>\<MyProject>.exe
    5. Upload the .exe file to Blob Storage in Azure (make sure the executable is referenced in the Azure Storage linked service template). When uploading a custom activity executable to blob storage, be sure to upload ALL contents of the bin\Debug (or Release) folder. Copy the entire folder to blob, otherwise the custom activity will fail because it will not be able to find the dependencies the application needs to run. Also, use subfolders when uploading custom activities; this makes it future-proof in case further activities are added. Best practice is to use Azure Storage Explorer, in which you can access the storage account and create the container and subsequent folders. This can’t be done directly in the Azure portal because blob storage is a flat structure, so the concept of folders is non-existent. However, in Storage Explorer the ‘/’ creates a pseudo hierarchy in the blob path, making it a virtual folder.
    6. Create the pipeline in Data Factory v2 using Batch Service -> Custom.
    7. Create a Batch account and pool (if not already created) and set up the pipeline as normal.
    8. Trigger the run and test the pipeline.

Custom Activities run in Azure Batch, so make sure the Batch service meets the application’s needs. Whilst we are on the topic of Azure Batch, I would like to add a note here on how to monitor it. To monitor custom activity runs in an Azure Batch pool, or an Azure Batch run in general, use the Batch Labs tool. Once a run has completed, you can see the stderr.txt or stdout.txt file for the run details.
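As noted above, the executable can read the linked services, datasets and extended properties as JSON files dropped next to it at run time. A minimal sketch of parsing one of these with Newtonsoft.Json — the "connectionString" extended property here is a hypothetical name for illustration:

```
// Hedged sketch: ADF v2 places activity.json (plus linkedServices.json and
// datasets.json) in the executable's working directory.
using System;
using System.IO;
using Newtonsoft.Json.Linq;

class Program
{
    static void Main()
    {
        var activity = JObject.Parse(File.ReadAllText("activity.json"));
        // "connectionString" is an assumed extended property name.
        var conn = (string)activity["typeProperties"]?["extendedProperties"]?["connectionString"];
        Console.WriteLine(conn);
    }
}
```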


Big Data Analytics Series – P3 – Service Account Authentication with Azure Hosting

A recent project required me to use the Google Analytics Core Reporting API for data ingestion. The API call was being made in an Azure function, which worked completely fine locally but failed during the service account authentication process when hosted in Azure with an ‘Invalid provider type specified’ error for the authentication certificate.

Three days of debugging and research finally led to the answer. The issue was not actually with the client library, but with the way in which the authentication certificate’s X509KeyStorageFlags are handled.

X509KeyStorageFlags define where and how to import the private key of an X.509 certificate:

  • MachineKeySet – Private keys are stored in the local computer store rather than the current user store.
  • PersistKeySet – The key associated with a PFX file is persisted when importing a certificate.
  • Exportable – Imported keys are marked as exportable.

This is a known issue with Azure hosting; you need to tell the server how you would like it to deal with the X.509 certificate.

As per the API documentation, to load the private key from the certificate, the following code is needed:

var certificate = new X509Certificate2(@"<certificatePath>", "<privatekey>", X509KeyStorageFlags.Exportable);

This line of code will work fine locally, but will fail in Azure because we need to tell the initializer that the private key(s) are stored in the local machine store rather than the current user store. To do this, simply add an additional flag to the final parameter of the above line of code, as shown below:

var certificate = new X509Certificate2(@"<certificatePath>", "<privatekey>", X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.Exportable);

If you check the above definition of MachineKeySet, this does exactly what we need by telling the initializer that the private key or keys are stored in the local machine store rather than the current user store.

So, the final Service Account Credential code is included in the below Github gist link:

Hoping this saves anyone trying to use Service Account Authentication with Azure Hosting (not just with Google Analytics API but any other such API) hours of debugging and time.

NOTE: Replace values inside <> for your own values.


BIG DATA ANALYTICS SERIES – P2 – Setting up a Mock (Local SQL Server) Data source for Data Management Gateway (DATA FACTORY)

This blog walks you through setting up a local instance of SQL Server on your machine with the aim of creating a mock-up data source. It has been written from a technical perspective, so it is assumed that you are tech-friendly. This procedure is a sub-part of data integration between on-premises data stores and cloud data stores using Data Factory, and falls under the process of ‘Moving data between on-premises sources and the cloud with Data Management Gateway’. Links to full instructions for the latter part of the process will be shared below as well, but the prime focus of this blog will be the pre-requisite aspect (as this can turn out to be a nuisance if configured incorrectly):

  1. Download SQL Express (Developer’s Edition)
  2. Install SQL Express. Select ‘Custom’, and during the installation create a user. Make sure to install the ‘Database Engine’ and any other features required.
  3. Now, carry out the following checks:
    • Go into SQL Server Configuration Manager
    • Select SQL Server Network Configuration
    • Select ‘Protocols for MSSQLSERVER’
    • TCP/IP needs to be enabled. Right click on TCP/IP and select enable.
    • On the IP Addresses tab, make sure you scroll down to the IPAll section and set the port number to 1433.
    • Restart the Server. To do this:
      • Go to SQL Server Services
      • SQL Server
      • Right Click ‘Restart’
        • If you have issues restarting, then just restart your machine. The server should automatically start once the machine has been restarted but nevertheless double check in configuration manager.
        • A few spot checks to see if the server is running:
          • ‘ping localhost’ in cmd line
          • Enable telnet to be able to connect to the port:
            • Run command prompt in Admin mode
            • Type the following in command prompt to enable telnet: ‘dism /online /Enable-Feature /FeatureName:TelnetClient’
            • Now open a new command prompt
            • Type ‘Telnet’ and press Enter. This will show the telnet welcome message.
          • Once telnet has been enabled, you should be able to connect to localhost via telnet, giving it the TCP port.
            • Open a command prompt
            • Type in ‘telnet <IP Address> <Port>’ and press enter. Port here should be 1433.
            • If a blank screen appears then the port is open, and the test is successful.
            • If you receive a ‘connecting …’ message or error message, then something is blocking the port. Most likely this could be a firewall either Windows or Third party.
  4. Connect from the command line with the following (the -E switch uses Windows-based authentication):
  5. ‘C:\> sqlcmd -S <ip-add> -E’. NOTE: You can find the local IP address by typing ‘ipconfig’ in command prompt.
  6. Run the following, substituting in your own credentials:
    • The role should be a minimum of ‘db_datareader’. SQL Server allows a user to be allocated one of three roles: ‘db_datareader’ (read permission only), ‘db_datawriter’ (read and write permissions) and ‘db_owner’ (all permissions).
    • <login_name> and <user_name> should be the same.
      • CREATE DATABASE <db_name>
      • GO
      • CREATE LOGIN <login_name> WITH PASSWORD = N'<password>'
      • GO
      • USE <db_name>
      • GO
      • CREATE USER <user_name> FOR LOGIN <login_name>
      • GO
      • EXEC sp_addrolemember 'db_owner', '<user_name>'
      • GO
      • exit
  7. Now, connect with the test account created by running the following in command prompt:
    • sqlcmd -S <ip-add> -d <db_name> -U <login_name> -P <password>
  8. You should now be at SQL Prompt having successfully connected.
  9. You should now connect to the server via SSMS (SQL Server Management Studio) using the above created credentials:
    • Open SSMS
    • Select ‘Database Engine’
    • For the server name: if your instance is called ‘MSSQLSERVER’ you are using the default (unnamed) instance, so use the computer name on its own. If your instance is called ‘SQLEXPRESS’ you are using a named instance and will have to use the syntax ‘localhost\SQLEXPRESS’.
    • Username: username created above
    • Password: password created above
    • Either authentication method should work.
    • Now, commence in SSMS as normal
    • You can continue and set up the Mock Database Gateway and ADF Pipeline in Azure. Go on the below link to read a full walk through on how to do this. You would also follow the below link for a real data source gateway set-up, just configuring the input data set to be the actual data source:
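The telnet port test above can also be scripted. A small bash helper that performs the same check (this is a stand-in for telnet and assumes bash's /dev/tcp support plus the coreutils timeout command, so it won't work in plain sh):

```shell
# Succeeds if a TCP connection to $1:$2 opens within 2 seconds --
# the same check as the telnet test above.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# e.g. for the local SQL Server instance:
if port_open localhost 1433; then
  echo "port 1433 open"
else
  echo "port 1433 closed or blocked"
fi
```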

The same set up can be completed using a VM (Virtual Machine). Follow the same steps but firstly deploy a Virtual Machine in Azure, then connect to it via Remote Desktop and continue as normal.



All Things Cloud at Microsoft

Finally, I have managed to get around to this blog; it’s long overdue because of Christmas and New Year, but as they say, better late than never!

If you are a regular reader of my blogs then you may recall me mentioning this a couple of times in my previous blogs, but thanks to Elastacloud, I had the opportunity to attend a special Azure Technical Briefing at the Microsoft Paddington Central Office in London. This is an exclusive inside event for Azure/Cloud developers (like myself) where Microsoft reveal all the latest developments in Azure and the world of Cloud Computing, but from a very technical perspective.

It is a very high demand event from which only a handful of candidates are selected to attend. I was lucky because not only did my boss have close ties with Microsoft but the man organising the event is his good friend. Thanks to this I skipped the reservation queue and got direct entry – wicked! In fact, it was my boss himself who had informed me about this Technical briefing encouraging me to sign up and then leave the rest to him, which was super beneficial for me and that’s what counts.

I should have just taken the train, but me being me, I decided to travel by car. Yes, travelling by car to London, on a weekday, in the morning – not the cleverest of ideas. I realised this when it took me 4 hours to get to the Microsoft office from my house (Loughborough to Paddington is 112 miles – normally 2 hours 19 mins), so you can clearly see how bad the traffic must have been. I did have a bit of a panic thinking I was late (registration started at 9:30am with the day beginning at 10am) as I got there at 10:10am. However, I came to realise that over half of the attendees hadn’t yet shown up, which was a great sigh of relief. Before you say anything, I’m a pretty punctual guy most of the time – I usually arrive at places 10 mins before the start time – but sometimes I am also a victim of “Indian Timing”, which I try to avoid if I can.

I was like a kid at a comic con (ok maybe not quite exactly like that but that’s the closest description I can find to explain how I felt). I was buzzing (I don’t think I had used Snapchat throughout the year as much as I did on that day lol). I was snapping my time at the office as much I could, the classy glass building and super cool elevators were quite something. I knew this was the sort of place I had always wanted to visit, and now this wish had become a reality. To add to this ecstatic feeling, I was also dressed in smart clothing (which I enjoy surprisingly) which further boosted my enthusiasm and excitement. I was at MICROSOFT!

At the classy reception desk I had to check in and take my badge, and then got directed to floor 5 where the briefing was taking place. I thought only developers would be attending the day, but I was joined by project managers, business analysts etc. Not everyone was a coder or from a programming background, which did take me by surprise, but the briefing was organised into multiple sections, each targeting different role levels within the business architecture. So, some sections were delivered from a project manager perspective, others from a business analyst point of view, whilst others were very developer-based, heavy in code and technical language. It’s obvious where I fit in, so I don’t think I need to say much on this.

Without going into the technical aspects (as I am sure you don’t want to know or care about this) I will summarise an overview of what was covered in the day:

The briefing was split twofold – Continuous Integration/Continuous Deployment (CI/CD) and Visual Studio Team Services (VSTS).

CI/CD covered the following:

  • Automated Testing
  • Release Management
  • Usage Monitoring
  • Code Reviews
  • Continuous Measurement
  • Feature flags
  • Infrastructure as Code/Infrastructure As A Service
  • Configuration Management


VSTS focused on:

  • Agile Project Management for Visual Studio Team Services
  • Using Pull Requests with VSTS
  • Moving from Subversion to Git
  • Creating CI/CD Pipeline with VSTS into Azure
  • .NET Development in Azure with VSTS
  • Build in Azure and deploy on-premises with VSTS
  • Container based deployments with Docker, Kubernetes, Azure and VSTS

I was fortunate to have already covered some of these topics on previous projects at work, so it was good to have prior knowledge and hands-on experience to relate to the information. It was an insightful day and I found it helpful. My favourite topics were Release Management and Infrastructure As A Service (IaaS) – these were really cool!

The icing on the cake was being able to speak and take a selfie with the legendary Edward Thomson, the man behind Visual Studio Team Services. He is the man who wrote the code that merges pull requests for developers – Git Project Manager for Microsoft Visual Studio Team Services. This was truly EPIC (as seen below)!

I also had the opportunity to speak to David Gristwood (Technical Evangelist at Microsoft, the man who had organised this briefing and my boss’ friend) over lunch, which was also a pleasant experience.

The day closed off at 4:30pm (it overran, as it was supposed to finish at 4pm). I made a few good friends at the event through networking and I hope to meet them again at similar events in the future (we have exchanged numbers and regularly discuss upcoming events and attendance). Last but not least, what is signing out of Microsoft without striking a pose? That’s exactly what I did before leaving – Microsoft, thank you for having me, it was a pleasure!


Big Data Analytics Series – P1 – SQL Error SQL71006 (Data Factory)

This year I plan to create a series of posts on various technical topics, one of which is Big Data Analytics. These posts will be based on first-hand experiences I have had on projects, plus tricks and tips on coding for Big Data. I hope this series is helpful and actually comes in use for other developers; sharing my own knowledge with the online tech community sounds like a neat idea to me :). I’m also open to feedback, and do correct me if you feel I have made an error, so feel free to comment below.

When working on a Data Factory project it is common to use SQL scripts, either via stored procedures or standalone. This is more commonly known as ‘adding a post-deploy script’ as part of a database project and then doing a build.

This is all well said and done; however, with this comes a commonly known issue which can take up so much time (for no reason). It turns out that the build will moan when you add the SQL script(s) to the project in Visual Studio. The error will be something like:

‘Error SQL71006: Only one statement is allowed per batch. A batch separator such as ‘GO’, might be required between statements.’

You may see the above or a similar syntax error. Make sure the SQL script has first been tested and run in SQL Server Management Studio (SSMS) to confirm there isn’t an actual issue with the script; if it runs fine there, this is a nonsensical error.

To fix this error:

  1. Right click on the .sql script file and select ‘Properties’
  2. Set the Build Action to ‘None’
  3. Now rebuild and run

Basically, by setting the build action to ‘None’, the script(s) are excluded from the build, overcoming the error. This is a known issue, and let’s hope it gets resolved soon. The above is a workaround rather than a permanent fix, so do bear this in mind. If the script runs in SSMS there is nothing actually wrong with the SQL; it is just a build issue.
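For reference, the Build Action change ends up in the .sqlproj file itself; a sketch of the before and after (the file name here is an assumption):

```
<!-- Before: the script is part of the database build and is parsed as T-SQL -->
<ItemGroup>
  <Build Include="PostDeploy.sql" />
</ItemGroup>

<!-- After: Build Action = None, so the build skips the script -->
<ItemGroup>
  <None Include="PostDeploy.sql" />
</ItemGroup>
```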


Azure Resource Manager – Referencing with Linked Templates

Azure Resource Manager Templates (more commonly known as ARM Templates) serve as a very handy tool for Cloud Engineers. ARM Templates allow you to automate resource deployment in Azure in a “cookie-cutter” type of approach i.e. they are just scripts instructing Azure what resources to deploy and how.
ARM Templates support a concept called Linked Templates. As the name suggests, a linked template is basically an ARM Template stored in a storage location and linked via its URI into the main template as a resource deployment. This also makes the linked template a sub template of the main template. When using linked templates there are a few things that need to be taken into consideration; this article will focus on referencing with Linked Templates.
When referencing an external source, i.e. a resource from a sub template (especially when one resource is dependent on another resource), make sure the resource type is concatenated to the resource name. If we were to have a Data Factory resource depend on a storage account (from a sub template) for its dataset, then just defining the storage account name as a parameter in the file will not suffice. When writing the ‘dependsOn’ clause, it has to have the resource type prepended to it, like below:
"dependsOn": ["[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]"]
Having just the storageAccountName will error, as the template needs to know where to look and what to look for. Also, if we had a storage account and a Data Lake Store with the same name, this would introduce ambiguity into the template, causing it to error on deployment.
Be careful when referencing in ARM Templates. It is not possible to reference a resource from a sub template parameter; instead, the parameter will need to be added to the current working template (with the same name). Even though you have the linked template reference, the parameters can’t be pulled in via output referencing, as ARM Templates do not support inheritance. It can be argued that this is repetition of parameter definition, but (currently) this is the only way to reference sub template parameters.
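Putting the pieces together, a linked-template resource and a dependent resource in the main template might look like the following sketch — the URI, resource names and parameter names here are placeholders, not a definitive layout:

```
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2017-05-10",
  "name": "storageLinkedTemplate",
  "properties": {
    "mode": "Incremental",
    "templateLink": { "uri": "https://<storage-uri>/templates/storage.json" },
    "parameters": {
      "storageAccountName": { "value": "[parameters('storageAccountName')]" }
    }
  }
},
{
  "type": "Microsoft.DataFactory/factories",
  "apiVersion": "2018-06-01",
  "name": "[parameters('dataFactoryName')]",
  "dependsOn": [
    "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]"
  ]
}
```

Note how the parameter is re-declared in the main template and passed down through the `parameters` section of the deployment resource, as described above.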

My first experience at Microsoft’s Future Decoded

‘There is a first time for everything’, as the saying goes. Last Wednesday was the first time I attended a conference, thanks to my company. Being a Gold Standard Microsoft Cloud Partner sure has its perks: not only can we take advantage of the Microsoft Developer Network (MSDN) benefits but, being part of the Microsoft Partner Network (MPN), we are first to know of the latest developments, upcoming conferences, insider news and all things Microsoft, as was the case with this conference.

This was the fourth year Microsoft have held their annual Cloud Conference ‘Future Decoded’. I had only heard about such large-scale events and had always aspired to attend one if I got the chance, so I was just really glad that I could finally go. I didn’t quite know what to expect in terms of content (apart from it being really informative) but was looking forward to the freebies (especially stickers).

To begin with, the day had a manic start; the venue was 132 miles from my house (a 2 hours and 20 mins drive) which turned into a 4-hour drive. Yes! 4 hours. I left my house at 4:30am in the morning (I’m a morning person so I’m always an early bird) and was expecting to arrive in London at 6:50am. I ended up arriving at the venue at 8:30am. You may say, having stopped over the night in a hotel the day before would have been a lot easier, but I love my bed too much :P.

The drive felt tedious, especially waiting in standstill traffic for 30 mins on the M1 (motorway). Luckily I had one of my close friends attending as well, so I made a pit stop first at his place to pick him up and then we arrived at the venue together. The drive itself was an experience; it felt very much like I was driving to the airport (as I always leave at such early hours when I have a flight to catch to India).

Having arrived at ExCel London at 8:30am, my friend and I completed registration and made our way down to the main conference centre where the opening keynote took place. Seeing the room full of tech nerds of all different levels was truly buzzing; I looked around and saw the conference hall jam-packed with people. Just then the lights dimmed down and the spotlight shone mid-stage on the Microsoft logo plate, assisted by two cinema-size projector screens on either side of the stage. I’d only seen this stuff online till now, but now I was actually here. This was so cool.

The opening Keynote was super interesting and I was gripped. I’m not going to go into the actual details of what was discussed, that’s rather too technical, and I also don’t want to bore you, so will keep it to the point and short.

Having my mate there was a bonus; exploring the Expo together was awesome. Following the opening keynote, everyone split up into their relevant workshops; luckily we had already arranged our agenda beforehand so knew where to go and when. The workshops didn’t all turn out as good as I expected: of the two I attended, one wasn’t that informative as I had advanced prior experience of the API, but the first workshop was great. Lots to learn and very informative. Good thing I went prepared – I took my notepad and pen with me to scribble down an overview and breakdown of the workshops, which I then fed back to my CEO, who was also present there along with my colleagues.

I was disappointed as there were no freebies being given away, but to compensate, the Expo included free professional photos taken by LinkedIn themselves; I remember queueing up for about 40 mins to get mine taken, and it was well worth the wait. Alongside the free photo booth they had Adobe and various other media and cloud organisations present, and not forgetting my company, which was also there representing the Azure User Group UK and the Inspiring Women in Data Science user group. Though I didn’t speak, my colleagues ran the UK Azure User Group workshop. This was a huge success, so two thumbs up!

Whilst exploring the Expo I managed to get my hands on the HoloLens. This was a superb experience; I finally had the opportunity to play with virtual reality, which, funnily enough, I had not done until that day (which I myself find extremely weird seeing how much I love tech).

Without going on for much longer now, the day ended with a closing keynote from some more Microsoft employees (some of whom had flown in especially from Redmond) and Gary Neville, former Manchester United footballer. You may ask, as did I when I first found out that he was speaking at the conference, what can a footballer possibly have to say about tech? It turned out he was there to promote a new partnership venture with Microsoft and Lancaster University called UA92 (University Academy 92) – a new initiative to train pupils between 16 and 21 in 10 core principles deemed crucial for making good future leaders.

All in all, the day was a fruitful experience, I got to learn a lot and was continuously buzzing from the tech atmosphere. I would definitely consider going again next year. The future surely is decoded.


‘Refit’ – REST Library for .NET Standard

Most developers (if not all) have had, or will have, to connect to API endpoints to retrieve some data to use in some shape or form. The same goes for us: recently, whilst developing some Azure Functions for a client, we had to get data back from our own API that had been created for them. (Just as a side note, we used PFA – Portable Format for Analytics – to create our scoring engine in Python, exposing two endpoints in our model, which became our API endpoints.)

The obvious route, which we initially took, was to use the HttpClient class to GET and POST our requests; however, this could get messy very quickly, especially when it came to managing multi-threading. It was at this point that we discovered the ‘Refit’ REST library.

‘Refit’ is a REST library for .NET that is inspired by Square’s Retrofit library. It basically turns your API into a live interface allowing you to call the methods that you have defined and in return the library will provide an implementation of the interface (that you have declared) through dependency injection. This overcomes the management of loading data from the endpoints and handling asynchronous tasks (which we would have had to do had we used the HttpClient Class).

Your interface would look something like the following:
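(The original post embedded the code as a gist, which is not included here; a minimal sketch of such an interface, assuming the Search class and the double sentiment score described below, might be:)

```
// Hedged sketch of a Refit interface -- the names are illustrative.
using System.Threading.Tasks;
using Refit;

public interface IScoringApi
{
    [Post("/score")]
    Task<double> GetScore([Body] Search search);
}
```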

where the endpoint is “/score”. As you can see, we wanted to POST, so a ‘Post’ attribute is placed on the method with the endpoint. Similarly, if we wanted to GET then we could use a ‘Get’ attribute in the same way.

Also note here that we are using our own custom class ‘Search’ as our ‘Body’ argument i.e. the metadata that we will be passing into our method which the library will use to communicate to the API Endpoint. The argument would have to match the API model (and what it accepts in its request), but because we created our own API we had full control on it.

We defined our metadata in its own class as such:
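(Again, the gist is not embedded here; a sketch of such a metadata class, with an assumed property name, could be:)

```
// Hypothetical metadata class matching what the API expects in its request body.
public class Search
{
    public string Text { get; set; }
}
```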

The additional class would be needed if you were using a custom object, otherwise hardcoding the argument will suffice.

All you then need to do is make a RestService.For<YourInterface> call and that’s it. This would look something like this:
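(The gist itself isn't included here, but the call would be along these lines — the base address and names are placeholders:)

```
// Build a live implementation of the interface, then call the endpoint.
var api = RestService.For<IScoringApi>("https://<your-api-host>");
double score = await api.GetScore(new Search { Text = "some text to score" });
```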

And that is all there is to it. Seriously! Just two lines of code? Yes! The beauty of the library is that it handles all the asynchronous tasks, threading, data loading etc., and a bonus is that because it is a .NET Standard library you can use it everywhere. Usually an API call will return some JSON data, which would then need to be deserialised using JSON.NET (I would highly recommend the most awesome Newtonsoft.Json library for all your JSON needs), and vice versa. However, in our case, as we control the API, it returns a sentiment score of type double.

The ‘Refit’ library can be added from the NuGet Package Manager by searching for ‘Refit’; then add ‘using Refit;’ at the top of your file (NuGet will handle installing any dependencies the library needs, so no need to worry about that).

Note: It is advised that you first test the API you wish to use with a tool like ‘Postman’ to make sure the endpoint is working correctly and to validate that what is being returned is correct. Then make use of Refit and query in the same way from your code.

‘Refit’ makes it very easy to develop a quick, strongly-typed REST client in C# for almost anything with minimal effort. It’s super cool and quite fun to use too, so give this one a go for sure!


Azure Function Tools: Now part of Azure development workload

Until now, it wasn’t possible to debug Azure Functions locally. However, with the release of Visual Studio 15.3, this has become possible. As part of the Azure development workload, Microsoft has introduced ‘Azure Function Tools’, which lets you test your Azure Functions locally.

This toolkit came at a very appropriate time, as a recent project that we worked on required exactly this functionality, so we made full use of it for our testing. There are, however, some prerequisites before the toolkit can be installed:

  1. Install Visual Studio 15.3 (or later)

When you start installing Visual Studio, you should get a screen where you can select various workloads. Select ‘Azure development’. This will install the relevant SDK and tools needed, including ‘Azure Function Tools’. If you already have Visual Studio 15.3 (or later) installed and wish to add just the ‘Azure development’ workload, follow the steps below:

  1. Find Visual Studio Installer on your computer.
  2. Click to start the installer, and then select Modify.
  3. From the Workloads screen, select Azure development.
  4. Click Modify.
  5. After the workloads have been installed, click Launch.

Now that we have all the tools we need installed, we can start using the toolkit right away. Click File -> New Project and then choose the Azure Functions type (under Visual C# -> Cloud in the left-hand pane).


Once the project has been created, you code as normal using classes, interfaces etc. The important file, however, is the local.settings.json file, which is created along with the project. This is the equivalent of the Azure Function ‘Application Settings’ in the Azure Portal. The local.settings.json file is where the developer stores settings, such as connection strings, used for running the function on the development machine.
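As a rough illustration (the storage value below is the local storage emulator shorthand, and the extra key is a placeholder), a local.settings.json file looks something like:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MyConnectionString": "<your-connection-string-here>"
  }
}
```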

Note: For all trigger types except HTTP, you need to set the value of AzureWebJobsStorage to a valid Azure Storage account connection string.

To add more functions to the project, right-click the project and choose “Add Item”, then choose the “Azure Function” item template. On the dialog that launches, select the sort of function trigger you require and provide a name for the function and a connection string.
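The steps above can be sketched in code too. A minimal HTTP-triggered function, similar in shape to what the “Azure Function” item template generated at the time (the function name is a placeholder), looks something like:

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class HelloFunction
{
    // Hypothetical HTTP-triggered function for illustration only.
    [FunctionName("HelloFunction")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");
        return req.CreateResponse(HttpStatusCode.OK, "Hello from a local Azure Function");
    }
}
```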

You can develop locally as normal, debug, add NuGet packages, create unit tests and the like.

To then publish the app to Azure, right-click on the project and select ‘Publish’. On the dialog that opens you can either create a new Azure Function or publish to an existing one.


Note: The folder option is not intended to be used with Azure Functions at this time, though it is shown in the options.


And that’s it! As easy as that. This tool is definitely worth the time and effort, so give it a try!