Azure DevOps Pipelines: Discovering the Ideal Service Connection Strategy
https://devblogs.microsoft.com/premier-developer/azure-devops-pipelines-discovering-the-ideal-service-connection-strategy/ (Sat, 24 Feb 2024)

John Folberth explores various configurations, decisions, and pros/cons that should be evaluated when deciding how your DevOps environment will deploy code into Azure.


About

This post is part of an overall series on Azure DevOps YAML Pipelines. The series will cover any and all topics that fall into the scope of Azure DevOps Pipelines. I encourage you to check it out if you are new to this space.

Introduction

When an organization is trying to configure its Azure DevOps (ADO) environment to deploy into Azure, it is immediately met with the dilemma of how its DevOps instance will execute deployments against the Azure environment. This article will go over the various configurations, decisions, and pros and cons that should be evaluated when deciding how your DevOps environment will deploy code into Azure.

This article will not cover the nitty-gritty details of how to configure the connection; that is covered in the Microsoft documentation. Nor will we discuss which type of authentication should be created; there are additional resources that cover this. Instead, this article focuses on questions such as “How many Service Connections should I have?”, “What access should my Service Connection have?”, and “Which pipelines can access my Service Connection?”

Deployment Scope

This question of how to architect our Service Connections, the means by which Azure DevOps communicates with Azure, is the main focal point of this piece. Deployment Scope, for the purposes of this article, refers to which Azure environment and resources our Azure DevOps Service Connection can interact with.

The answer will vary depending on your organization’s security posture, scale, and maturity. The most secure option has the smallest deployment scope and entails the most overhead; on the flip side, the least secure has the largest deployment scope and the least overhead. We will cover three scenarios and their associated pros and cons: One Service Connection to Rule Them All, a Service Connection per Resource Group, and a Service Connection per Environment.

As for what access the identity from ADO should have in Azure, I typically recommend starting with Contributor, as this provides the ability to create Azure resources and interact with the Azure management plane. If your organization is leveraging Infrastructure as Code (IaC), I would also recommend adding User Access Administrator to provision role-based access control (RBAC) assignments and allow Azure-to-Azure resource communication via managed identities. This is effectively the same permission combo as Owner; however, if you are familiar with Azure recommended practices, assigning the Owner role is not recommended in the majority of cases.
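As a rough illustration of that starting point, the sketch below grants those two roles to a service connection’s service principal using the Azure CLI invoked from Python. The object ID and subscription scope are placeholders, and it assumes an authenticated az CLI is on the path.

    import subprocess

    # Placeholders: substitute your service principal's object ID and your scope.
    SP_OBJECT_ID = "00000000-0000-0000-0000-000000000000"
    SCOPE = "/subscriptions/11111111-1111-1111-1111-111111111111"  # or a resource group

    def assign_role(role: str) -> None:
        """Grant one Azure RBAC role to the service connection's identity."""
        subprocess.run(
            [
                "az", "role", "assignment", "create",
                "--assignee-object-id", SP_OBJECT_ID,
                "--assignee-principal-type", "ServicePrincipal",
                "--role", role,
                "--scope", SCOPE,
            ],
            check=True,  # fail loudly if the CLI call does not succeed
        )

    # Contributor for the management plane; User Access Administrator so IaC
    # pipelines can provision RBAC for managed identities.
    for role in ("Contributor", "User Access Administrator"):
        assign_role(role)

Scoping the assignments to a resource group rather than the whole subscription is how you shrink the deployment scope in the per-resource-group scenario above.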

Check out the series in the Healthcare and Life Sciences Tech Community here.

Load testing your applications using Azure Load Testing, JMeter and GitHub Actions
https://devblogs.microsoft.com/premier-developer/load-testing-your-applications-using-azure-load-testing-jmeter-and-github-actions/ (Thu, 20 Jul 2023)

Dominique St-Amand shares an introduction on JMeter concepts and goes on to create a basic test plan to load test a sample application and run it through a GitHub Action workflow.


I’ve been working with more customers that are starting to take testing (unit, integration, end-to-end, and load testing) more seriously. You may ask, “Dom, really? I thought testing was trivial”. Unfortunately not. We’re entering an era where businesses are producing software like never before, yet, relatively speaking, these businesses are not software companies. They seek to prioritize the speedy creation of business value while disregarding the importance of testing. Development teams, more often than not, are under pressure when the applications they develop do not perform the way they were intended to work after being released. Testing is not ingrained in their DNA. If it were, the extra stress and anxiety associated with debugging the problems post-release would be mitigated.

Testing is a critical part of the Software Development Lifecycle (SDLC). Load testing, unfortunately, is a type of testing that not many are aware of. What is the maximum number of users the application or system can handle before performance is impacted? How does the application or system behave under peak and sustained loads? Which areas of the application or system are most impacted by increased load? Those questions are only the start of a conversation that load testing can answer.

From experience, many are not aware of load testing because many businesses are not equipped to perform these types of tests; development teams do not have the proper tools to simulate load. This is why I’m really excited that Azure introduced the Azure Load Testing service. Azure Load Testing is a fully managed load-testing service that enables you to generate high-scale load. The service simulates traffic for your applications, regardless of where they’re hosted.

The Azure Load Testing service comes with a quick test screen when using the UI. In this post, we will be load testing using Azure Load Testing’s JMeter capability. Apache JMeter is a Java application designed to load test functional behavior and measure performance, and it allows us to configure much more advanced load testing scenarios. One great advantage of the service is that it allows a maximum of, as of the time of writing, 45 engines running in parallel. The recommendation is 250 threads per engine (think of threads as virtual users); however, this is actually a soft limit, and you can use the engine health metrics to monitor the load on the instances and adjust the number of threads accordingly. This means you can have a multitude of simultaneous users on your application at one time. Quite difficult to achieve unless you have real traffic!
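To put those limits in perspective, here is a quick back-of-the-envelope sketch. The 45-engine ceiling and 250-threads-per-engine recommendation come from the figures above; the helper itself is just illustrative arithmetic.

    MAX_ENGINES = 45          # parallel engines allowed, at the time of writing
    THREADS_PER_ENGINE = 250  # recommended soft limit (one thread = one virtual user)

    def peak_virtual_users(engines: int, threads: int = THREADS_PER_ENGINE) -> int:
        """Approximate concurrent virtual users for a test configuration."""
        if engines > MAX_ENGINES:
            raise ValueError(f"{engines} engines requested; the service allows {MAX_ENGINES}")
        return engines * threads

    print(peak_virtual_users(45))  # 11250 virtual users at the documented limits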

To get started, I will give a light introduction to JMeter concepts and then create a basic test plan to load test a sample application I’ve put together. I will then show you how to run this test through a GitHub Actions workflow. This flow and setup can also help you with your regression testing.

Continue reading Dominique’s post here.

Deploying an Azure APIM Self-Hosted Gateway
https://devblogs.microsoft.com/premier-developer/deploying-an-azure-apim-self-hosted-gateway/ (Sun, 25 Apr 2021)

With so many customization and integration options, organizations can leverage these powerful Azure services for a variety of architectures and applications. Self-hosted gateways help improve performance while ensuring secure and efficient API traffic.

Bryan Soltis explores Self-Hosted Gateways to provide secure, on-prem API access with cloud-based Azure APIM Management.


When working with APIs, how traffic is routed is a REALLY important topic. Whether it’s for security, latency optimization, performance improvements, or because admins are just into that sort of thing, companies often want complete control over where and how users access their APIs. While Azure API Management (APIM) offers a great cloud-hosted API management solution, this may present a challenge when local traffic needs to stay in the neighborhood. Luckily, Azure APIM provides a self-hosted API gateway to ease the struggle.

Why APIM Self-Hosted Gateway?

Companies implement Azure APIM to control access to their APIs. By implementing subscriptions and products, administrators can ensure every request is authenticated and validated while protecting their backend systems. When those backends are on-premises, there are a number of reasons why a company would want to keep all the traffic within its network:

  • Only internal traffic: If all traffic to an API is from internal users, it may make sense to keep all the communication within the network. There is little benefit in having calls go out of the network to the cloud and back.
  • Reduced bandwidth costs: Because most cloud platforms charge for data leaving the cloud environment, keeping traffic internal for the duration of the operation can cut down on bandwidth usage significantly.
  • Reduced latency between systems: Because the client and API are around the corner from each other, there should be a lot less latency between the systems.
  • All traffic stays local and secure: With all API traffic kept in the network, security concerns are usually mitigated by the existing IT safeguards. This simplifies implementation, as APIs and their consumers can operate freely within the confines of the established corporate network.

Enter Azure APIM Self-Hosted Gateways. This feature allows you to provide secure, on-prem API access with cloud-based Azure APIM management. They are a fantastic way to improve internal traffic communication and performance, with all the benefits of a centralized, cloud-hosted management experience. Double win!
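The latency benefit is easy to measure empirically. Below is a minimal Python sketch that compares median round-trip times through a cloud-hosted gateway and a self-hosted one; both hostnames and the path are hypothetical placeholders for your own endpoints.

    import time
    import requests  # pip install requests

    # Hypothetical endpoints: the same API exposed through the cloud-hosted
    # gateway and through a self-hosted gateway inside the corporate network.
    ENDPOINTS = {
        "cloud gateway": "https://contoso.azure-api.net/orders/health",
        "self-hosted gateway": "https://apim-gw.corp.contoso.local/orders/health",
    }

    def median_latency_ms(url: str, samples: int = 20) -> float:
        """Median round-trip time of a simple GET, in milliseconds."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            requests.get(url, timeout=10)
            timings.append((time.perf_counter() - start) * 1000)
        timings.sort()
        return timings[len(timings) // 2]

    for name, url in ENDPOINTS.items():
        print(f"{name}: {median_latency_ms(url):.1f} ms")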

Continue reading on Bryan’s blog

Dangling DNS and Subdomain Takeovers
https://devblogs.microsoft.com/premier-developer/dangling-dns-and-subdomain-takeovers/ (Tue, 23 Mar 2021)

This post explores what is commonly referred to as a “Dangling DNS Subdomain Takeover” and why you should never delete a resource that backs a CNAME entry in your DNS without redirecting or removing the CNAME record first.

Andrew Kanieski takes a look at what’s known as a “Dangling DNS Subdomain Takeover”. It’s a common way for bad actors to gain unintended access to host a site under your subdomain.


It’s a busy work week, your backlog seems never-ending, and you’re rushing to get things pushed out to production. You think, I’ve got a new configuration for my Front Door that I want to deploy; I’ll just tear down the old one and push that ARM template to deploy its replacement. You fire off the delete command. Once it’s done, you push the latest scripts for deployment and go get coffee. You come back to find that although the delete was successful, the deployment failed. You check the error logs: “Name already in use”.

You think, meh, no problem, I’ll just run the deployment again; maybe the delete hadn’t fully committed before the replacement was deployed with the same name. You run it again: “Name already in use”. You triple-check. Same. You go to your resource explorer looking for a Front Door with the same name. It’s not there. What’s going on?

You go to visit your application to see if it’s running. You swing over to app.sample.com, which should, by way of a CNAME entry on your domain, route you directly to your Front Door. Instead, the site takes you to some other website, one being hosted under your subdomain. Have I been hacked?

The scenario described above is what’s known as a “Dangling DNS Subdomain Takeover”, and it is a common way for bad actors to gain unintended access to hosting a site in your subdomain. Let’s break down how it works!
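The detection half of the problem can be sketched in a few lines. Assuming the third-party dnspython package, the check below flags a hostname whose CNAME target no longer resolves, which is the dangling state a squatter can claim; app.sample.com is the example host from the story above.

    import dns.resolver  # pip install dnspython

    def dangling_cname(hostname: str) -> bool:
        """Return True if hostname has a CNAME whose target no longer resolves."""
        try:
            answer = dns.resolver.resolve(hostname, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return False  # no CNAME record here, nothing to dangle
        target = str(answer[0].target).rstrip(".")
        try:
            dns.resolver.resolve(target, "A")
            return False  # target still resolves; a live resource backs the record
        except dns.resolver.NXDOMAIN:
            print(f"{hostname} -> {target}: target no longer resolves (dangling!)")
            return True

    dangling_cname("app.sample.com")  # the host from the story above

Running a check like this over your zone’s CNAME records before (and after) tearing down resources is a cheap safeguard against the scenario described here.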

Check out the full story on Andrew’s blog.

Calling a Helper API in an Azure APIM Inbound Policy
https://devblogs.microsoft.com/premier-developer/calling-a-helper-api-in-an-azure-apim-inbound-policy/ (Mon, 22 Mar 2021)

With Azure APIM, you can completely control how developers consume your services. Through policies, you can transform data, validate requests, integrate backends, and probably cook the world’s best cheeseburger. This powerful feature enables complex systems and architectures to be seamlessly connected, ensuring your data and processes stay safe.

Bryan Soltis explores how to incorporate API authentication into APIM.


Azure API Management is quickly becoming one of my favorite parts of the Azure platform. From SOAP (shudder) to REST APIs, developers can quickly register and secure their existing interfaces using Azure APIM. By implementing policies, they can transform requests, validate client/subscription information, check rate limits, and a whole mess of other features. And yes, there is a policy to allow you to call a completely different API as part of your process. Let me show you!

With Azure APIM, developers can register and expose their APIs, regardless of where they are located. The built-in developer portal and subscription/products system allows for an extremely customizable consumer experience, as organizations can tailor their API offerings to their users’ needs. It’s an extremely versatile tool in the Azure arsenal and one every API developer should know.

Recently, I had a client that was looking to migrate hundreds of existing APIs to Azure APIM. Part of this change would be to support their existing client credentials/logins, which they validate with a home-grown API within their network. The challenge was how to incorporate that authentication API into their APIM calls. Inbound Policies to the rescue!

I created a PoC to show how this can be done within APIM. Let’s see how I did it.
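Inside APIM the pattern is expressed in policy XML (typically with the send-request policy); as a language-neutral sketch of the flow that policy implements, here is the same pre-authentication idea in Python. Every URL, header, and payload shape below is a hypothetical stand-in, not the client’s actual API.

    import requests  # pip install requests

    AUTH_API = "https://auth.internal.contoso.example/validate"  # hypothetical helper API
    BACKEND = "https://backend.contoso.example/orders"           # hypothetical backend

    def handle_request(client_token: str, payload: dict) -> requests.Response:
        """Validate the caller against the helper API, then forward to the backend."""
        # Step 1: the inbound-policy step, i.e., call the helper authentication API.
        auth = requests.post(AUTH_API, json={"token": client_token}, timeout=5)
        if auth.status_code != 200:
            raise PermissionError("helper API rejected the supplied credentials")
        # Step 2: only reach the real backend once validation has succeeded.
        return requests.post(BACKEND, json=payload, timeout=10)

The key design point is the ordering: the helper call happens in the inbound stage, so invalid callers are rejected before the backend ever sees the request.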

Check out the full walk-through on Bryan’s blog.

Collect and Automate Diagnostic Actions with Azure App Services
https://devblogs.microsoft.com/premier-developer/collect-and-automate-diagnostic-actions-with-azure-app-services/ (Wed, 30 Sep 2020)

Reed Robison shares techniques to collect diagnostic data and automate recovery behavior with Azure App Services.


Troubleshooting production systems is often a balance between restoring services quickly and trying to collect enough information to isolate what caused the issue. For complex application issues, it’s almost always helpful to capture a memory dump. Memory dumps are a snapshot of a process in time and with them, you can see precisely what your app was doing when it experienced a problem. By examining call stacks, you can see exactly what every thread is doing – what they are waiting on, exceptions that were thrown, and sometimes even the data that is responsible for getting it into a bad state. Post-mortem debugging isn’t for everyone, but for the most complex problems it’s how you get answers. Luckily, there are some good tools that automate dump analysis, and you can always call Microsoft for deeper assistance.

The biggest challenge is typically reacting fast enough to capture a dump while the problem is occurring. Once you recycle a process, that data (and opportunity) is gone. Sometimes manual memory dumps are possible but frequently you must automate the process in order to get the data you need.

Azure App Services provides a range of Diagnostic Services to choose from. This post will explore some of the tools available and ways to automate more complex scenarios.

Let’s consider the scenario where a web application instance gets into a “bad” state and is no longer serving requests properly. Requests routed to this one instance fail, but the other instances seem to be working fine. The goal is to quickly detect the condition, create a memory dump, and recycle only the instance that is causing the problem.

Manual Intervention

You could simply restart the App Service, either through the Azure portal or through an automation script. The downside of this approach is that it restarts all the instances, and the impairment will persist until a human gets involved. That’s not ideal for production scenarios. To capture a dump before you restart, navigate to Diagnose and solve problems in the Azure portal and choose Diagnostic Tools. Choose Collect Memory Dumps, pick a specific instance to dump, then save to a designated storage account for further analysis.

[Image: collecting a memory dump in the Azure portal]

Trapping a Specific Exception Condition

Frequently, there is a specific exception or condition responsible for getting your app into a bad state. For example, you might see evidence of exceptions that occurred at some point in a memory dump or log file, but understanding how you got there means trapping the exception as it occurs. This can get tricky with multiple instances, since you may not know which instance the error will occur on. In this scenario, you want to monitor the process for an exception to occur and trigger a memory dump at that moment in time. To do this with App Services, we’ll typically use something like procdumphelper to set up an exception monitor and configure the monitoring rule via the Kudu console.

There’s a good overview of how to set this up here.

Tip – when configuring a rule to dump on a specific exception, you need -g if you are triggering on native exceptions. If you are triggering on managed exceptions, remove the -g param (managed exceptions will not trigger if it is used).

Automating Rules

App Services allows you to define Auto-Heal rules to automate some types of recovery actions. You can configure these using the Azure portal under Diagnose and solve problems. The list of pre-defined conditions is limited, but it’s easy to use and handy for some common scenarios.

For instance, you can trigger this action based on Request Duration, Memory Limit, Request Count, or a specific Status Code returned from your app. You can choose to Recycle Process, Log an Event, or take a Custom Action (such as creating a memory dump, running a profiler, or even running a specific executable).

[Image: Auto-Heal rule configuration]

While Auto-Heal rules make it easy to automate against these conditional triggers, you don’t have a lot of additional options to customize them. In the scenario where you need to quickly identify a problem instance, dump it, and restore service, the default conditions (request duration, memory, count, or a status code) might not be enough. If your app could return a specific error code as a response, you could use that as the trigger, but that assumes your app knows it’s in a bad state and has the ability to return a unique status code to serve as a trigger. That may not always be possible.

Another automated option to restore service is Health Check. It allows you to specify a path in your application to ping on a regular interval. The idea here is that if an instance fails to respond to a ping it can automatically be detected as unhealthy and removed from the load balancer. If it remains in an unhealthy state for an extended period of time, it is replaced with a new instance. More details can be found here. It does not (yet) provide any means to debug or dump that problem instance and it doesn’t remove it right away.
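For illustration, a Health Check target can be as small as the hypothetical Python/Flask sketch below. The /healthz path is an arbitrary choice (register whichever path you expose in the Health check blade), and the feature only needs a route that returns a 2xx response while the instance is healthy.

    from flask import Flask, jsonify  # pip install flask

    app = Flask(__name__)

    def dependencies_ok() -> bool:
        """Cheap self-checks (database reachable, queue depth sane); stubbed here."""
        return True

    @app.route("/healthz")  # register this same path in the Health check blade
    def healthz():
        if dependencies_ok():
            return jsonify(status="healthy"), 200
        # Any non-2xx response marks this instance unhealthy so the platform
        # can pull it from the load balancer rotation.
        return jsonify(status="unhealthy"), 503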

If all the above approaches don’t provide the granularity to achieve the goal, you could consider writing your own automation script to control the actions. Azure exposes the ability to set up an alert rule (see the Monitoring option in your App Service) that triggers off a variety of conditions like metrics and logs. If there are characteristics (for example, a handle count above a threshold) that indicate a “known” bad state, you can configure an action group to kick off an Azure Function, Runbook, Logic App, etc., where you control what happens next. If you can identify a way to trigger some kind of notification, then you can use PowerShell to author your own actions. There are a variety of ways to enumerate resources in your environment and take actions to restart services, instances, and even create memory dumps.

For example, here is a PowerShell script to recycle a role instance of a WebApp. You could use this technique to recycle a specific problem instance instead of restarting the entire App Service.
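As a rough Python equivalent of that idea for readers who prefer calling REST directly, the sketch below restarts a Web App through the ARM API. The subscription, resource group, and site name are placeholders, it assumes you already hold a valid bearer token for https://management.azure.com, and it performs the simpler whole-site restart rather than a single-instance recycle.

    import requests  # pip install requests

    SUB = "11111111-1111-1111-1111-111111111111"  # placeholder subscription ID
    RG, SITE = "my-rg", "my-webapp"               # placeholder resource group and app
    TOKEN = "<bearer token for https://management.azure.com>"

    def restart_webapp() -> None:
        """Restart every instance of an App Service via the ARM REST API."""
        url = (
            f"https://management.azure.com/subscriptions/{SUB}"
            f"/resourceGroups/{RG}/providers/Microsoft.Web/sites/{SITE}"
            f"/restart?api-version=2022-03-01"
        )
        resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
        resp.raise_for_status()

    restart_webapp()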

I’ll go into the details of some approaches to automating memory dumps via PowerShell and the Azure REST APIs in the next post. You can use a combination of these techniques to fine-tune an automated response that creates memory dumps and quickly recycles problem instances.

Taste of Premier: Azure Security Center
https://devblogs.microsoft.com/premier-developer/taste-of-premier-azure-security-center/ (Tue, 11 Aug 2020)

Be sure to check out the latest episode of Taste of Premier, where Kerinne Brown and Lex Thomas discuss Azure Security Center. Security is foundational for Azure. Take advantage of the multi-layered security provided across physical data centers, infrastructure, and operations.

[Image: Taste of Premier episode]

How Microsoft helps customers adopt Azure through developer education
https://devblogs.microsoft.com/premier-developer/how-microsoft-helps-customers-adopt-azure-through-developer-education/ (Fri, 26 Jun 2020)

App Dev Manager Robin Smith discusses Microsoft’s approach to enterprise learning, highlighting key resources and programs for Azure education to help customers empower developers and close the tech skills gap.


Introduction

The pace of change in technology has never been as fast as it is today, and all indications are that this pace will continue to increase. The 2020 paper “Jobs of Tomorrow” published by the World Economic Forum estimates that “fully meeting the labor market demand for emerging professions and skills to meet the needs of the new technological era could add US$11.5 trillion in GDP growth over the next decade”. Furthermore, it’s highly likely that many of the key jobs in Data & AI and Engineering & Cloud Computing ten years from now don’t even exist today!

So how do companies manage this ever-increasing rate of change and all of the challenges that come with it? Microsoft CEO Satya Nadella continuously stresses the importance of “tech intensity”. Satya defines tech intensity as “the potential for companies to jump-start their growth by not just adopting technology, but by building their own technology too”. He further elaborates: “There are two aspects to tech intensity: First, every organization will need to be a fast adopter of best-in-class technology, and equally important, they will need to build their own unique digital capabilities, which starts with workers who are deeply knowledgeable about the latest technology.”

In this post, we review how Microsoft is helping our customers achieve tech intensity by providing a wide array of Azure learning opportunities to constantly build, maintain and strengthen the cloud capabilities of their developers and IT staff.

The Microsoft Approach to Learning

Microsoft is not a newcomer to technical education, but the approach has changed as software has evolved from shrink-wrapped CDs delivered every few years to cloud services that are updated daily. The training offerings of yesteryear were relatively narrow, focused on specific products, and changed slowly. In contrast, the training offerings of today are focused on key roles and skills, and content changes every couple of months to reflect the rapidly changing technology landscape.

Role-based Training

Microsoft works closely with internal and external partners to gather and analyze data on the key roles in modern enterprises large and small. For each of the roles, a Job Task Analysis or JTA is performed. This process results in a master list of skills for the role – i.e. the set of capabilities an individual needs to be highly effective in the role. This is an ongoing process and new roles and skills are added as technology evolves.

Microsoft uses the output of this process to develop role-based training for the roles that it considers critical, both from the perspective of current needs and the perspective of strategically important emerging needs. Role-based training is currently available for these Azure-related roles:

  • Azure Administrator
  • Azure Data Engineer
  • Azure Security Engineer
  • Azure Database Administrator
  • Azure Developer
  • DevOps Engineer
  • Azure Solutions Architect
  • Azure Data Scientist
  • Azure AI Engineer

Fundamentals and Specialty Training

Microsoft also offers a range of fundamentals training courses for those, such as IT managers, who do not need the depth of role-based training. Fundamentals training is also a great starting point for developers who are new to a technical area and need to establish foundational knowledge (e.g. Azure Fundamentals).

At the other end of the spectrum are specialty training courses that cater to the specific needs of projects or applications. A good Skilling Plan (discussed later as part of the Enterprise Skills Initiative) will often combine fundamentals training, role-based training and specialty training into a layered plan, with each stage building on the foundation established in the prior stage.

Many Routes to Success

Another signature aspect of the Microsoft learning approach is the recognition that different people learn differently. To maximize the success of all learners, Microsoft offers a mix of self-paced online learning and instructor-led virtual and in-person training. The training gives participants the ability to get hands-on with labs, and many of the Azure labs are delivered in easy-to-access, free sandbox environments. There is also a focus on interactivity, with frequent quizzes to check knowledge and provide feedback.

Key Resources and Programs

Microsoft offers many resources and programs to help guide customers and their teams on their learning journeys. In sections below, we are going to focus on Microsoft Learn and three key programs: the Cloud Skills Challenge, the Enterprise Skills Initiative and Microsoft Certification.

Microsoft Learn

Microsoft Learn is the cornerstone or “front door” to all training at Microsoft. There are over 1000 self-paced modules currently available with more being added all the time. Microsoft Learn is truly an “a la carte” offering: users can browse individual modules for guidance on a specific subject, or choose a learning path that aligns with a specific role such as Azure Developer. Learning Paths are a logically arranged group of modules that build a well-rounded understanding of a specific area. Many learning paths lead to certifications, covered in a later section of this blog post. Content in each module is presented in written form, video, interactive quizzes and hands-on labs.

Cloud Skills Challenge

The Cloud Skills Challenge adds some fun and friendly competition to learning! Leveraging the learning paths on Microsoft Learn or custom collections of modules, participants can track their learning progress and earn virtual badges and trophies on the way to achieving their learning goals. Compete against your friends for bragging rights, or connect with your Microsoft Application Development Manager, Technical Account Manager or Cloud Solution Architect to set up a team challenge at your company. Team challenges include leaderboards and prizes (availability depends on participation level and other factors) to encourage participation and to measure the skills gained.

Enterprise Skills Initiative

The Enterprise Skills Initiative (ESI) offers a programmatic way for enterprise customers to address knowledge gaps in their organizations and enable rapid, effective adoption of Azure. A Training Program Manager (TPM) is assigned to each customer, and a customized Skilling Plan is generated. The Skilling Plan will often combine self-paced learning resources on Microsoft Learn and Cloud Skill Challenges with virtual training day events and live instructor-led training to achieve specific learning goals. A key aspect of the TPM role is to work with customers to align learning with key initiatives and to track metrics to validate skilling impact on important projects.

The ESI program is available worldwide to selected customers and is continuously evolving to meet enterprise needs. Coming soon to the program are new learner and customer portals to make access and management more efficient, and live online Azure certification exam prep sessions. Contact your Microsoft Training Program Manager, Application Development Manager, or Technical Account Manager to learn more about the Enterprise Skills Initiative, and how your company can benefit from this program.

Certification

Microsoft consistently emphasizes the importance of certification to the learning process, and exams are updated frequently to reflect the latest changes in the underlying technology. Ideally, all of the knowledge gained by learners through Microsoft Learn, the Cloud Skills Challenge and the Enterprise Skills Initiative is validated and rewarded with one or more certificates. Individuals benefit from certification by gaining international recognition as experts in cloud technology and enhancing their professional credentials. Certification also allows IT leadership to quantify the value of their training programs and gain confidence that their teams are ready to leverage the power of Azure.

Conclusion

A culture of continuous learning is a cornerstone of successful companies in today’s world of fast-paced change. Microsoft is committed to helping customers achieve tech intensity and close the tech skills gap by providing a range of quality learning opportunities for developers and IT staff. I hope that this brief review of key programs and resources for Azure education has been helpful. To get started on your own custom learning journey, check out the resources at Microsoft Learn, sign up for a Cloud Skills Challenge, contact your Microsoft representative about the Enterprise Skills Initiative, and get on the path to certification!

Azure Private Link vs. Azure Service Endpoint for App Services
https://devblogs.microsoft.com/premier-developer/azure-private-link-vs-azure-service-endpoint-for-app-services/ (Wed, 24 Jun 2020)

In this post, App Dev Manager Chris Hanna compares Azure Private Links and Azure service Endpoints for App Services.


Are you trying to determine the best way to secure your website hosted on Azure App Service? Do you want to leverage Azure App Service, but still restrict your site to internal users or your Network?

Background information: I think it’s safe to say we all know that in networking we have two key directions of traffic: inbound and outbound. This is no different for an App Service. The reason I bring up this simple concept is that there are different architectural options for handling inbound/ingress and outbound/egress traffic to your App Service. Today we will be talking about inbound traffic. If you are looking for how to connect to resources in your VNET from your App Service, check out VNET Integration.

Continue reading here.

Managing Cloud Ready .NET App Secrets in Visual Studio
https://devblogs.microsoft.com/premier-developer/managing-cloud-ready-net-app-secrets-in-visual-studio/ (Sun, 31 May 2020)

App Dev Manager Keith Beller explores how to handle App Secrets in Visual Studio.


Secrets need to be handled with care, and this is the responsibility of every developer. We’ve all read those news reports of someone accidentally checking a production password into source control, only to find out the next day that their credit card has been maxed out. Obviously, no one wants to be “that person,” so today we are going to provide a few tips that will help you maintain your app secrets like a professional.

The .NET Core documentation team has posted several excellent articles on secret management that I would encourage you to read. There is no need to rehash all the content included in those articles; instead, I’d like to highlight some of the options you may want to consider as you configure your secrets management workflow.

  1. Safe storage of app secrets in development in ASP.NET Core
  2. Azure Key Vault Configuration Provider in ASP.NET Core

In summary, there are a few basic options for storing and retrieving sensitive data.

[Image: options for storing and retrieving sensitive data]

How secrets are retrieved in your development and production environments is a function of your secrets management workflow. The graphic below depicts a simplified workflow that demonstrates the most common options. The critical point here is that your workflow must not persist secrets to source control. Additionally, and for completeness, you should not store secrets in a database. We also won’t discuss using Active Directory, but it may be an option if your team is working in a Windows context.

[Image: simplified secrets management workflow]

No matter what secrets management option you select, there are general principles that should always be followed. First, development secrets should be unique and should not be shared with upper environments like QA or Prod. Next, you should limit access to production secrets to those who need to know them. Establish a rotation policy for production secrets. Finally, don’t store unencrypted secrets in production. As the workflow illustrates, if you use Active Directory in development, then you would use Active Directory in production. This option may work well for resources like MS SQL Database. Another option is environment variables. If you’re working in the cloud or a hybrid environment, you can pair Secret Manager with Azure Key Vault. When working in the cloud, this is the preferred option, since Key Vault ensures production secrets remain encrypted.

Environment Variables

As we point out in the Protect Secrets in Development article linked above, if you decide to go the environment variable route, please be aware that they’re generally stored in plain, unencrypted text. If the machine or process is compromised, environment variables can be accessed by untrusted parties. Additional measures to prevent disclosure of user secrets may be required, so the considerations should be evaluated with your team as you decide how to set up your workflow. If you proceed with this option, all you’ll need to do is set up environment variables for both your development and production environments, and then make sure your production variables are encrypted.
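For illustration, reading such a variable at startup might look like the following sketch in Python; the variable name is a hypothetical placeholder, and failing fast beats silently falling back to a hard-coded default.

    import os

    # Hypothetical variable name; set it per environment, never in source control.
    conn = os.environ.get("ORDERS_DB_CONNECTION")
    if conn is None:
        raise RuntimeError("ORDERS_DB_CONNECTION is not set for this environment")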

Secret Manager + Azure Key Vault

One feature that developers may not be aware of is the Secret Manager tool. This is available in the latest version of Visual Studio and pairs nicely with Azure Key Vault. You can access “Manage User Secrets” by right-clicking the project in the Solution Explorer. The cool thing about this option is that it creates a file that is uniquely linked to your project but is not found in the source code directory hierarchy, so there is no way to accidentally check secrets into source control. The secrets are stored in a secrets.json file in the C:\Users\<UserProfile>\AppData\Roaming\Microsoft\UserSecrets\<Guid> directory on your development machine.

[Image: Manage User Secrets in Visual Studio]

Once configured, secrets can be retrieved in your application’s Startup.cs class via the Configuration API. From there you can pass them via injection to your ASP.NET controller or some other context. When you’re ready to release to the cloud, all you have to do is make sure you’ve set up your Azure Key Vault secret with a matching name and granted your application access. For more details, see the Azure Key Vault Configuration Provider article linked above.
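The article’s examples are .NET, but the same retrieval pattern exists in other stacks. As a minimal sketch with the Azure SDK for Python (the vault URL and secret name are placeholders, and it assumes your identity has been granted get permission on secrets):

    from azure.identity import DefaultAzureCredential  # pip install azure-identity
    from azure.keyvault.secrets import SecretClient    # pip install azure-keyvault-secrets

    VAULT_URL = "https://my-vault.vault.azure.net"  # placeholder vault URL
    SECRET_NAME = "Db-ConnectionString"             # matches the name used in development

    # DefaultAzureCredential tries environment variables, managed identity, and
    # developer logins in turn, so the same code runs locally and in Azure.
    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
    secret = client.get_secret(SECRET_NAME)
    print(f"retrieved '{secret.name}' (value intentionally not printed)")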

Taking time to define a Secrets Management Workflow is relatively simple and will protect your application secrets from accidental exposure.
