I recently attended VSLive! San Diego. The conference focused heavily on web development and cloud architecture. There were several Microsoft presenters, but the majority were third-party Microsoft MVPs and consultants/authors, so I believe it's a good representation of the state of the industry (not just a Microsoft sales pitch). The overall message I took away is that there is a paradigm shift away from monolithic N-tier applications and toward microservice-based cloud architectures with web front ends. Cloud architecture and DevOps were recurring themes, with deployments mostly using Docker/Kubernetes, and a clear shift away from virtual machines toward containers. I'll try to summarize my takeaways, focusing on how they affect us at my job.
Microsoft has formalized its Modern Lifecycle Policy, with "Long Term Support" (LTS) and "Current" release cycle conventions (I'm calling the latter Short Term Support, or STS). Even-numbered releases (now at .NET 6.0) are LTS, supported for three years; odd-numbered releases are STS, aka "Current", supported for 18 months, just long enough for you to upgrade to the next LTS version. They are trying to move away from the old .NET Framework upgrade approach, where a monolithic upgrade arrives every few years and is a huge ordeal, to a more iterative one. .NET Core (now just called ".NET") is far more modular, so you can pick and choose which packages you need and, to some extent, tolerate version differences between them (at least minor versions within the same major version).
.NET Framework 4.8 will continue to be supported as part of Windows, so existing apps built on it will keep working for the foreseeable future; it's not going away. However, it won't get any new features. So the general recommendation is that you don't need to rush out and re-implement everything in .NET, but any new development should target the modern platforms, leveraging as much C# code reuse as possible. I believe we are on the right track here in porting our common AV libraries to .NET Standard, so they can continue to be used from the existing Framework app but also be available to a MAUI or Blazor app going forward.
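For reference, a shared library like the ones described above only needs a minimal project file; the project name below is illustrative, not one of our actual libraries. Targeting `netstandard2.0` keeps the assembly consumable from both .NET Framework (4.7.2+) and modern .NET:

```xml
<!-- AV.Common.csproj (hypothetical name) - a class library shared between
     the existing .NET Framework app and future MAUI/Blazor apps. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 is the broadest target: usable from .NET Framework
         4.6.1+ (practically 4.7.2+) as well as .NET 6/7. -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

Both the Framework app and any new MAUI/Blazor project can then reference this one project or its NuGet package directly.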
MAUI is the UI app framework for Windows, Mac, iOS, and Android client applications. Linux is currently not officially supported, but support is scheduled to be added next year. The general consensus is that you should do everything you can in a progressive web app, and only create a native mobile app (iOS/Android) if there is a good reason to. The keynote, of course, highly praised MAUI, but a couple of follow-up technical sessions showed several limiting bugs and missing features in 6.0. The good news is that 7.0 (official release due out this fall) adds enough missing features and bugfixes that it looks like a viable framework for our needs. The upgrade from 6 to 7 is essentially additive, with no breaking changes to speak of. The 7.0 pre-release is already available, so we can start working with it now, though it does not yet have a "go live" license. The MAUI-Blazor hybrid model seems very powerful, since you can reuse Blazor code (including UI!) inside a MAUI app. So primarily targeting Blazor for common components looks like a great way to maximize code reuse.
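The hybrid model amounts to very little ceremony in practice. A minimal sketch of the startup wiring, assuming a standard MAUI Blazor project layout (the `App` class is the template's root application type):

```csharp
// MauiProgram.cs - minimal sketch of a .NET MAUI Blazor hybrid setup.
using Microsoft.Extensions.Logging;

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder.UseMauiApp<App>();

        // Hosts Razor components inside the native app via a BlazorWebView,
        // so the same component library can also be served from a Blazor web app.
        builder.Services.AddMauiBlazorWebView();

        return builder.Build();
    }
}
```

The page markup then places a `BlazorWebView` control pointing at the root Razor component, and everything below that line is ordinary Blazor.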
Virtualization and Deployment
There is a paradigm shift in virtualization technology, cloud architecture, scaling, and deployment, and I think it is very relevant to our hosted systems offering. Docker containers carry much less overhead than VMs, in both machine resources and the human effort required to manage them. With containers, you start with a stock image preloaded with the necessary infrastructure (e.g., a MS SQL Server 2022 container), add your own layer of files/apps/configuration, and save the result as a new image. That image can then be deployed and managed simply with scripts and/or UI dashboards such as Docker Desktop or the Kubernetes interface. In terms of resources, you don't waste the extra couple of gigabytes each Windows VM consumes, so you can pack more containers than VMs onto a host machine; containers also run more efficiently, so you get more out of the host. I believe that if we eventually move toward this approach, it would simplify both our on-premises deployments and our managed hosted systems. We would currently be limited to Windows containers because of our .NET Framework dependency, but if/when we finish migrating to .NET, that will open up the option of Linux containers. We could use a Linux container for the SQL Server portion now, if that would help.
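As a concrete sketch of the "stock image plus our own layer" workflow: the Dockerfile below starts from Microsoft's public SQL Server 2022 Linux image and layers on an initialization script. The script name and resulting image tag are illustrative assumptions, not anything we have today:

```dockerfile
# Start from the stock SQL Server 2022 image (a Linux container).
FROM mcr.microsoft.com/mssql/server:2022-latest

# Accept the EULA so the container can start unattended.
ENV ACCEPT_EULA=Y

# Layer on our own schema/seed script (hypothetical file name).
COPY init-av-db.sql /usr/src/app/init-av-db.sql
```

Building with `docker build -t av-sql:base .` produces a reusable image that can then be run locally, pushed to a registry, or handed to Kubernetes, which is exactly the deploy-by-script story described above.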
SQL Server 2022
The change that would benefit us most is the set of new query optimizer performance improvements. The new Parameter Sensitive Plan (PSP) optimization feature is supposed to fix the "parameter sniffing" issues we've seen in the past. Previously, the optimizer tuned its query plan based on the parameter values used the first time the query executed, which may not be a good indicator of future calls. The new approach is much more sophisticated: it can keep multiple plan variants for a single query and choose among them based on the parameter values passed at each execution. Hopefully this will provide a speedup for our larger customer databases. You can turn it off if ever needed via the SQL compatibility level setting. There are several other interesting features in 2022, such as R and Python integration, new aggregate functions, date functions, row-level security, temporal versioning, and so on, but most of those would involve T-SQL changes, which means we'd have to raise our minimum requirement to 2022 in order to use them within the main AV product. I suspect we don't want to impose that on everyone just yet. But customers who can upgrade to 2022 should hopefully get a performance boost without changing any actual AV code.
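The opt-in/opt-out story boils down to two T-SQL statements; the database name here is a placeholder. PSP optimization comes with compatibility level 160 (SQL Server 2022), and it can be switched off independently if it ever causes a plan regression:

```sql
-- Enable SQL Server 2022 optimizer features, including Parameter Sensitive
-- Plan optimization, by raising the compatibility level to 160.
ALTER DATABASE AVDatabase SET COMPATIBILITY_LEVEL = 160;

-- If PSP optimization ever misbehaves, disable just that feature
-- without lowering the compatibility level:
ALTER DATABASE SCOPED CONFIGURATION
    SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = OFF;
```

This means the rollback path is cheap: a customer could trial the new optimizer behavior and back out of only the piece that regresses.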
Regarding Azure SQL, there is a feature called Stretch Database that lets you keep certain tables in the cloud and others local. You can even set up a single table to "stretch" to the cloud based on a filter function; e.g., the "data warehouse" of rarely accessed historical data could be offloaded to the cloud, leaving only the heavily accessed data local. It will require some research to see if/how this may be of use in our scenario. Of course, Azure SQL is rather costly, so it may only make sense for certain use cases.
On security for web apps, the push is now toward OpenID Connect (OIDC) with single sign-on (e.g., "Sign in with Google/Facebook") over in-house proprietary username/password handling. The OIDC-authenticated user would be mapped to one of our internal users, so our existing site/activity-based authorization would still be enforced in our code, but we'd offload the nasty details of secure authentication, password-reset emails, and all the rest to the big providers. Ideally we should move in that direction for any new web development projects going forward.
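In ASP.NET Core the provider hand-off and the internal-user mapping both have natural hooks. A minimal sketch, assuming Google as the provider; the authority, client ID/secret, and the user-lookup step are all placeholders, not anything from our codebase:

```csharp
// Program.cs - sketch of OIDC sign-in with an internal-user mapping hook.
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(options =>
    {
        // Keep the session in a local cookie; challenge via the OIDC provider.
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.Authority = "https://accounts.google.com"; // external provider
        options.ClientId = "<client-id>";                  // placeholder
        options.ClientSecret = "<client-secret>";          // placeholder
        options.ResponseType = "code";
        options.Events = new OpenIdConnectEvents
        {
            // Map the provider identity (the "sub" claim) to an internal user
            // here, so existing activity-based authorization keeps working.
            OnTokenValidated = ctx => Task.CompletedTask
        };
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.Run();
```

The key design point is that only authentication moves to the provider; authorization stays entirely in our code, driven by the mapped internal user.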
There is plenty of exciting new stuff to dig deeper into! I'll be focusing on Blazor, containers, and security, and how those can help us modernize our development stack and cloud deployments.