An interesting paper describing why computers stop and possible ways to mitigate it. Written by Jim Gray of Tandem Computers, it presents the results of a study on system failures in large distributed systems, along with ways to prevent them and improve MTBF.
As all Azure web applications are directly available over the Internet, and hence public, most of the time I need some form of protection for them, such as a Web Application Firewall (WAF). In Azure, I have at least two options for that:
- Using an Application Gateway with WAF in front of an Azure Web App
- Using Azure Front Door with WAF in front of an Azure Web App
Both options suffer from the same basic problem: the web application is still public and can be directly accessed using its URL (appname.azurewebsites.net).
I know that I can restrict access to the web apps by IP address, but in a complex setup, with one web app calling another and so on, I would first like inter-web-app traffic not to go over the Internet, and second, a more elegant solution to restrict access to the web apps themselves, something like the diagram below:
Azure now has the “Private endpoint connection” functionality in preview, which allows the creation of a private endpoint for a web application. This means that once created, the web application is no longer accessible from the Internet but only from Azure networking resources (VNET, subnets and so on).
Traffic to the web app is directed over Azure Private Link: the private endpoint is assigned an IP address from the VNET the web app is integrated with, so no traffic goes over the Internet. More than this, if you have an ExpressRoute or VPN connection to your on-premises resources (such as a database), then traffic between the Azure web application and the on-premises resources will go through Private Link and ExpressRoute or VPN. Advantages:
- The web app is secured by eliminating public exposure
- Traffic between the web application and on-premises resources flows securely, over VPN or ExpressRoute
- Data exfiltration from the VNET is avoided
The Private Endpoint is used only for incoming traffic to your web application. Outgoing traffic will not use the Private Endpoint but the VNET integration feature instead.
Note: The VNet integration feature cannot use the same subnet as the Private Endpoint; this is a limitation of the VNet integration feature.
OK, so I can secure the web application by not allowing any public traffic to it, but I still want it to be accessible from the Internet and protected by a WAF. In this scenario, you simply put an external-facing Application Gateway in front of the web application, as in the diagram above, and the web application can accept traffic from the Internet, filtered by the Application Gateway WAF.
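As a sketch, provisioning such a gateway can look like this with the Azure CLI. All resource names below are hypothetical, it assumes the public IP, VNET and a dedicated gateway subnet already exist, and the exact flags may vary with CLI version:

```shell
# Hypothetical sketch: an external-facing Application Gateway (WAF_v2 SKU)
# in front of the web app. Names (rg-demo, agw-appname, etc.) are made up.
az network application-gateway create \
  --resource-group rg-demo --name agw-appname \
  --sku WAF_v2 --capacity 2 \
  --vnet-name vnet-demo --subnet snet-agw \
  --public-ip-address pip-agw \
  --servers appname.azurewebsites.net \
  --http-settings-protocol Https --http-settings-port 443

# Turn on the WAF in prevention mode
az network application-gateway waf-config set \
  --resource-group rg-demo --gateway-name agw-appname \
  --enabled true --firewall-mode Prevention --rule-set-version 3.1
```

For the backend to work against an App Service host, the HTTP settings should also pick the host name from the backend address, otherwise the web app will reject the requests.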
If the web app needs to be accessible from other Azure VNETs or on-premises networks then, instead of a public-facing Application Gateway, you can put an internal-facing one in front, with the same results (the web app can also be accessed internally, from other VNETs, using VNET peering).
I find this Private Endpoint feature especially useful when I have more than one web app and they call one another. Instead of setting IP restrictions on each web app (and making sure the IP of the calling web app is whitelisted by the called one), I can integrate them all with private endpoints, making sure that traffic from one to another is allowed (because they are usually on the same VNET) while all public traffic is denied. In front of the entry-point web app I can put an Application Gateway with WAF to be able to access it securely from the Internet.
Note: Using Azure Front Door instead of a public-facing Application Gateway will not do the trick. The web app will still reject traffic coming from Front Door.
Setting up Azure Private Endpoint integration is quite simple:
- Assuming there is already a VNET with the corresponding subnets created, the first step is to integrate the web application with a VNET and a subnet.
- As highlighted above, you will need a different subnet for the Private Endpoint than the one you have integrated the web app with.
- Create the private endpoint; from that moment on, incoming traffic will be restricted to Azure sources only. Incoming traffic will go through the Private Endpoint and outgoing traffic will go through the VNET integration subnet.
- Once the Private Endpoint is created, the DNS provided by Azure will cease to work and a Private DNS Zone will have to be created. Microsoft has details on it here. Basically, a private DNS zone named privatelink.azurewebsites.net will have to be created and linked to the VNET in which the Private Endpoint has been created.
Once the Private Endpoint is created, you can proceed with the Application Gateway creation and the web app will be secured.
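The steps above can be sketched with the Azure CLI. All resource and subnet names are hypothetical, and flags may differ slightly between CLI versions:

```shell
# Sketch of the setup. Names (rg-demo, vnet-demo, appname, subnets) are
# made up; adjust to your environment.

# 1. Integrate the web app with a VNET (a dedicated subnet for VNet integration)
az webapp vnet-integration add \
  --resource-group rg-demo --name appname \
  --vnet vnet-demo --subnet snet-integration

# 2. Create the Private Endpoint in a *different* subnet
webapp_id=$(az webapp show --resource-group rg-demo --name appname --query id -o tsv)
az network private-endpoint create \
  --resource-group rg-demo --name pe-appname \
  --vnet-name vnet-demo --subnet snet-endpoints \
  --private-connection-resource-id "$webapp_id" \
  --group-id sites --connection-name pe-conn-appname

# 3. Create the private DNS zone and link it to the VNET
az network private-dns zone create \
  --resource-group rg-demo --name privatelink.azurewebsites.net
az network private-dns link vnet create \
  --resource-group rg-demo --zone-name privatelink.azurewebsites.net \
  --name dns-link-appname --virtual-network vnet-demo \
  --registration-enabled false
```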
Sometimes running less software is more, when you consume it as a service.
“Run Less Software: If a component has become a commodity, we shouldn’t be spending precious development time on maintaining it; instead we should be consuming it as a service.
In the history of enterprises this is controversial, but even containers are now run and operated as a service. If your engineers aren’t building data centers any more, why are they building container platforms?”
Short summary from Part I
In Part I, I discussed how I analysed the application landscape and what criteria were used to prioritise the modernisation efforts.
- Upgrade to a Windows OS version with LTSC
- Upgrade to a .NET Framework or .NET Core version with LTS
- Classify your applications as systems of record, systems of differentiation and systems of innovation, for better prioritisation
- Investigate options for reducing operational overhead (move to PaaS or SaaS solution, use containers, use DevOps)
- Make an in-depth application assessment looking for:
- Source code availability
- 3rd party components
- Running versions of Windows OS, .NET, .NET Core
- Type of application (desktop, web, Windows Service, IIS Service)
- Technologies incompatible with moving the app into cloud
- Local storage
- Embedded logging
- Embedded config parameters
- State management
- Hostname, DNS dependency, localhost dependency, etc
- Rights the application needs in order to run properly
- Application security (authentication and authorisation)
- If you want to port the application, make a list of technologies used by the app that are not compatible with .NET Core and look for alternatives:
- Windows Communication Foundation (WCF)
- Windows Workflow (WF)
- ASP.NET Web Forms
- .NET Remoting
- Check the use of Entity Framework
To further evaluate the applications, I’ve used a set of Microsoft tools that can provide an evaluation of the current application state. They are available as extensions to Visual Studio or standalone tools.
- .NET Portability Analyser – a tool that analyses assemblies and provides a detailed report on the .NET APIs that are missing for the applications or libraries to be portable to your specified target .NET platforms. The Portability Analyser is offered as a Visual Studio extension, which analyses one assembly per project.
- .NET API Analyser – a Roslyn analyser that discovers potential compatibility risks for C# APIs on different platforms, detects calls to deprecated APIs, and comes as a NuGet package.
- .NET Framework Analyser – you can use it to find potential issues in your .NET Framework-based application code. It suggests fixes for those issues and can highlight anything that needs to be addressed before moving to a new version of .NET Framework or to .NET Core.
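As a sketch, the NuGet-based analysers can be added to a project from the command line, and the Portability Analyser is also available as the standalone ApiPort tool. The package names below are the ones Microsoft publishes on NuGet; verify them against the current documentation:

```shell
# .NET API Analyser (Roslyn analyser, delivered as a NuGet package)
dotnet add package Microsoft.DotNet.Analyzers.Compatibility

# .NET Framework Analyser (also a NuGet package)
dotnet add package Microsoft.NetFramework.Analyzers

# Portability Analyser as the standalone ApiPort tool
# (MyApp.dll and the target platform string are illustrative)
ApiPort.exe analyze -f MyApp.dll -t ".NET Core"
```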
2. Select a modernisation approach
The approach to modernisation and the prioritisation of investment (development effort) have to be dictated by business priorities (business strategy) and requirements. The data gathered from the in-depth assessment of the applications also has to be taken into account when establishing a modernisation approach.
In my landscape, the architectural drivers for modernisation were:
- Improved stability – critical applications had a high fault rate, and the maintenance effort was quite high due to deprecated technologies
- New functionalities – we had requests from the business for new functionalities, which were cheaper (effort-wise) to implement if we first modernised the app and moved it to the cloud
- Ability to respond to customers faster – we had requests from customers (through business demands) for new functionalities that could not be delivered in a time frame short enough to become a market advantage
- Quicker bug fixes – as we own the majority of the source code in our landscape, this wasn’t a very important driver, but it can be added to the list
- Improved scaling capabilities – this was an important driver for us, as N-tier applications are quite difficult to scale horizontally (to take advantage of the cloud’s elasticity) and vertical scaling has its limits (both technical and financial)
- New market challenges – again, N-tier applications are not very agile, and keeping up with services offered by competitors is hard, especially in a dynamic sector like banking and finance
From an operational overhead point of view, the following things had to be improved by modernisation:
- Applications with obsolete functionalities requiring a large amount of support work – the landscape contains old applications that are now used for only a fraction of their functionality but are still critical, therefore requiring high SLAs and dedicated support staff
- Skills – well, not every average .NET developer we can hire still remembers .NET Remoting and how to work with it
- Internal standards and IT strategy – as we are trying to move in an Agile direction (which also implies a change in our technical thinking: more APIs, REST services, containers, a DevOps approach), obsolete technologies are becoming more than technical debt, more like a barrier. Try to run a .NET Remoting app in a container, or try to scale it horizontally.
Cloud migration options for applications
According to industry guidelines (Gartner and others), we have the following options when migrating applications to the cloud:
- Lift-and-shift, aka rehosting
- Revise
- Replatform
- Rearchitect
- Rebuild
Each of these methods has its trade-offs. Modernisation effort fits into every one of the above-mentioned approaches (except lift-and-shift, which is not modernisation per se), as my long-term goal is to move most of the applications to the cloud.
Lift-and-shift – It basically means we took our apps from on-premises data center VMs to cloud VMs (OK, plus some additional networking to support the infrastructure). This was the first wave of cloud migration in my landscape, and it worked pretty well for the intended purpose, which was to lower operational costs and maintenance overhead. We also gained some increased uptime (it’s nice to have a VM Scale Set configured in a couple of minutes instead of hours or days), but that was not modernisation, merely tinkering around the edges of the application rather than making significant changes.
So, whenever you want to reduce the operational and support overhead and/or cannot make significant changes at the application code level (maybe you don’t have the source code anymore, maybe the app makes extensive use of obsolete technologies, maybe the cost-versus-benefits ratio doesn’t make the business case), the lift-and-shift approach will do just fine.
Revise – In the first step, we tried to get away with minimal changes to the applications, the goal being to move them to PaaS cloud solutions (Azure Web Apps) and, in the long term, to use Windows Containers. What we did for this was just to update .NET Framework to an LTS version, solve any incompatibilities, and address the cloud no-go technologies used by the app (hard-coded config parameters, local file system usage).
Replatform – It’s kind of a middle ground between lift-and-shift and rearchitect or refactor. It involves some modifications to the code to take advantage of the new cloud infrastructure. For example, for the part of my landscape I decided to replatform, I made some changes, such as:
- Instead of internal message queues we switched to Azure Service Bus
- No more local file system storage allowed, switched everything to storage accounts
- Use of Azure Web Apps deployment slots for test/dev/staging environments
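The replatforming targets above map to fairly simple provisioning steps. As a sketch with the Azure CLI, where all resource names are hypothetical:

```shell
# Azure Service Bus replacing the internal message queues
az servicebus namespace create --resource-group rg-demo --name sb-demo --sku Standard
az servicebus queue create --resource-group rg-demo --namespace-name sb-demo --name orders

# Storage account replacing local file system storage
az storage account create --resource-group rg-demo --name stdemo --sku Standard_LRS

# Deployment slot for the staging environment of the web app
az webapp deployment slot create --resource-group rg-demo --name appname --slot staging
```

The code-level part of the change (swapping the queue client and file I/O for the Service Bus and storage SDKs) is where the actual effort goes; the infrastructure itself is cheap to stand up.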
With this change, we achieved some cost reductions (compute density for Web Apps is in any case better than for individual VMs), better release management, an increased SLA and lower operational overhead (no internal effort needed for Web Apps), and we also laid the ground for the next wave of modernisation.
Rearchitect – It involves completely rearchitecting the app to better suit the cloud environment (Azure in my case), with significant alterations to the app. It also has the advantage that we can target specific cloud services (like AKS, AWS Elastic Beanstalk or AWS ECS). We are in the process of doing just that with some of the most revenue-intensive applications, using a microservices approach.
Rebuild – Basically, you rewrite the app from scratch, throwing away the existing code base. In this case, first ensure there is a valid business case for it and support (and funding) from the stakeholders. I have some candidates for this kind of work, but the business case does not justify it.
When choosing a modernisation approach, also evaluate whether you can replace the app with a SaaS solution. If the app supports business processes that generate little value or are not a differentiator (think of HR, facility management and so on), then it makes a lot of sense to just replace it with a SaaS offering.
Another viable option is to simply retire an application. If the business process has been updated and no longer requires it, or if the business is not using it anymore (or is using just 10% of it, maybe a report or two, but you still support it), then it’s a good idea to just retire it. To quote Gregor Hohpe: “If you never kill anything, you will live among zombies.”
Choose an Architectural Approach
As I advanced further with the application assessment, it became clear that I should choose at least one architectural approach for the modernisation efforts (actually, I chose two approaches, depending on the business value of the applications). I had a list of constraints:
- Increase delivery agility
- Increase applications capability to further innovate and sustain change
- Lower running costs
So I began looking into APIs and microservices, and I came up with a concept inspired by Gartner’s MASA proposal.
MASA stands for Mesh Apps and Services Architecture and is an agile architecture composed of decoupled apps, mediated APIs and services. It includes architectural principles like decoupling app components using APIs and creating services of optimal granularity (see Domain Driven Design), and it advocates designing fit-for-purpose apps. Each component has an API (or at least consumes an API), and when all are connected, they form a mesh of interconnected services and applications.
This approach allowed us to somewhat simplify the apps, to use .NET Framework or .NET Core in various apps, and also enabled polyglot persistence (with SQL and NoSQL).
We sliced the applications into a kind of microservices. I say “a kind of” microservices because we didn’t follow all the guidelines to segregate the apps down to the lowest level, but went instead with what was comfortable for the development teams and from a requirements perspective. We ended up with macroservices.
A good reason for this is that we are starting from monolithic applications, some of them quite big (think of a core banking app). In this case it is not practical to go directly to by-the-book microservices; instead, we develop new functionalities as micro- or macroservices and, at the same time, extract functionalities and redevelop them as standalone services (see the strangler design pattern).
Mediated APIs apply one or more mediators to manage communication between an API consumer and the service that implements the API. API mediation reduces the complexity of managing multiple back-end services and increases the choice of technologies and models used to build services; in this scope, another core component is an API Gateway.
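As a sketch of such a mediation layer on Azure, an API Management instance can front the back-end services. The names, publisher details and OpenAPI URL below are hypothetical:

```shell
# Hypothetical sketch: Azure API Management as the API mediation layer
az apim create --resource-group rg-demo --name apim-demo \
  --publisher-name "Contoso" --publisher-email admin@contoso.com \
  --sku-name Developer

# Import a back-end service behind the gateway from its OpenAPI definition
az apim api import --resource-group rg-demo --service-name apim-demo \
  --api-id orders --path orders \
  --specification-format OpenApi \
  --specification-url https://appname.azurewebsites.net/swagger/v1/swagger.json
```

The gateway then becomes the single place where cross-cutting concerns (authentication, throttling, versioning) are applied, instead of being re-implemented in every service.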
What I can say is that going the macroservices (or microservices) way is not easy. We had long internal discussions about setting the scope boundary for each service; sometimes we went back to the drawing board, realising that the initial scope was wrong (too broad or too narrow). Also, when applying the strangler pattern and starting to slice a monolith, data persistence problems and database consistency between the old monolith and the service sharing most of the same persistence repository are hard to deal with.
We started this modernisation process about 9 months ago and now have full traction on it. The initial assessment and in-depth analysis took about 3 months, and since then we have been in the full development phase. As we started with some low-hanging fruit, we already see some benefits.
The costs of running applications that have been modernised and moved to Azure (with the full set of cloud functionalities) are definitely lower than before. Support and maintenance overhead is also lower, and horizontal scaling works like a charm when things are done properly.
We have improved our SLAs and decreased incidents, which means we now have a happier business, willing to invest further in this modernisation.
We also had many obstacles. The first was deciding where to start, because in the previous landscape monolithic apps were so tightly coupled that any change would break a lot of things.
Also, when modernising an app that depends on other apps, a lot of interim solutions must be provided until the other apps can also be modernised.
We had to implement an ODS (Operational Data Store) just for decoupling apps in the first stage of the modernisation process (the ODS was a good idea; we will keep it for good).
Overall, things are looking good; the investment was worth making and it has started to pay off.