Azure OpenAI Security: What You Need to Know



You may have seen in the news that many companies are starting to limit and control how their employees use OpenAI. The main reasons are privacy and data security concerns.👁‍🗨

Whilst many will halt usage and wait for governance to catch up, others will shift to running self-hosted instances. Self-managed instances come with more control and provide a “safer” environment for enterprises. The problem with this is the overhead of running a fully self-managed instance: endless headaches around running costs, design, governance, security, and usage. To avoid all this, most will lean towards services such as Azure OpenAI.

Here, Azure has done the legwork for you, so you can just spin up an instance and pay as you go.


In most cases, those wanting to use Azure OpenAI will already have a footprint in Azure or M365. Trust is therefore assumed by default, since Microsoft already hosts your email and data. You should still read the small print though!


🌍Open Systems to Open System

As with most Microsoft Azure products, the defaults lean towards usability instead of security. If you move from OpenAI’s ChatGPT to Azure OpenAI, be aware that both are open to the public by default.

By default, your instance is public-facing 🔥. Whilst not too big a deal for brand-new instances, you may want to limit this under the Networking tab.
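
If you would rather flip this programmatically than click through the portal, here is a rough sketch against the ARM REST API. The api-version and property name are my assumptions, so check the current Microsoft.CognitiveServices reference before relying on it:

```python
# Rough sketch: disable public network access on an Azure OpenAI resource via
# the ARM REST API. The api-version and property name are assumptions to verify
# against the current Microsoft.CognitiveServices reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholders - use your own values
resource_group = "<resource-group>"
account_name = "<your-instance>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.CognitiveServices/accounts/{account_name}"
    "?api-version=2023-05-01"
)

# Deny public traffic; pair this with private endpoints or selected-network
# rules so your own apps can still reach the resource.
body = {"properties": {"publicNetworkAccess": "Disabled"}}

resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json().get("properties", {}).get("publicNetworkAccess"))
```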

Remember, with public access enabled, anyone can “attempt” to access your instance. Attempt is the key word. How much that matters comes down to risk appetite; for some organisations, the mere ability to attempt access is already too much.

The endpoint follows the URI of: https://[your instance].openai.azure.com

Similar to static Storage Account endpoints, these can be easily enumerated.

If an attacker runs recon, a valid instance name will return a 404 error at the root, while names that don’t exist will simply fail to resolve.
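
To illustrate just how low the bar is, here is a hypothetical recon sketch in Python. The candidate names are made up; a real attacker would simply iterate over a large wordlist:

```python
# Hypothetical recon sketch - the candidate names below are made up, and a real
# attacker would simply iterate over a large wordlist.
import socket
import requests

candidates = ["contoso-openai", "fabrikam-ai", "random-name-12345"]

for name in candidates:
    host = f"{name}.openai.azure.com"
    try:
        socket.gethostbyname(host)            # valid instances resolve in DNS
    except socket.gaierror:
        print(f"{host}: does not exist")
        continue

    # An existing instance typically answers the bare root with an HTTP error
    # such as a 404 - still enough to confirm the name is taken.
    resp = requests.get(f"https://{host}/", timeout=5)
    print(f"{host}: exists (HTTP {resp.status_code})")
```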


🔑 Key Management

There is documentation on how to authenticate using Azure AD tokens; however, most will “play” with keys. When a new instance is created, similar to Storage Accounts, you are granted two keys. These keys can be used to interact with your deployment.
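
For reference, here is a minimal sketch of both styles using the openai Python package’s v1-style Azure client. The endpoint, deployment name and api-version are placeholders for your own values:

```python
# Minimal sketch of both auth styles with the openai Python package (v1+ client).
# Endpoint, deployment name and api-version are placeholders for your own values.
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

endpoint = "https://<your-instance>.openai.azure.com"

# Option 1: key-based access, using one of the two keys issued with the resource.
key_client = AzureOpenAI(
    azure_endpoint=endpoint,
    api_key="<key1-or-key2>",
    api_version="2024-02-01",
)

# Option 2: Azure AD / Entra ID tokens - no long-lived secret to hard-code or leak.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
aad_client = AzureOpenAI(
    azure_endpoint=endpoint,
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",
)

# Both clients are used in exactly the same way afterwards.
resp = aad_client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```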

If you are going to lean towards keys, do develop a key management process to ensure security. If you have restricted network access to the resource, the risk is reduced; however, as we endlessly see in the news, attackers get in. You should still apply good posture management to your keys and rotate them frequently. This reduces the risk should developers hard-code them in scripts.

Rotation also brings hard-coded secrets to light, as users tend to shout once something breaks. 💔
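
If you want to script the rotation rather than click through the portal, something like the sketch below works with the azure-mgmt-cognitiveservices package. Treat the exact parameter shape as an assumption and verify it against the SDK version you install:

```python
# Hedged sketch of scripted rotation using the azure-mgmt-cognitiveservices
# package. The regenerate_key parameter shape varies between SDK versions, so
# verify it against the version you install.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import RegenerateKeyParameters

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<your-instance>"

client = CognitiveServicesManagementClient(DefaultAzureCredential(), subscription_id)

# Regenerates both keys in turn. In practice you would move consumers onto the
# other key between the two calls so nothing breaks mid-rotation.
for key_name in ("Key2", "Key1"):
    client.accounts.regenerate_key(
        resource_group,
        account_name,
        RegenerateKeyParameters(key_name=key_name),
    )
    print(f"Regenerated {key_name}")
```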


🌲Logging & Monitoring

When working in Azure, the normal go-to is Diagnostic Settings > enable audit logs and ship them to a storage account or Log Analytics workspace. In this instance, that may not be enough.

From research, I can see that many flag that diagnostic settings may not give you enough. This again comes down to risk appetite. You have to think to yourself… It’s a Friday night, you are confident that someone has breached the system, and you have people at the top asking you questions. Can you answer:

What happened, Who did it, When did they do it, and from Where?
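
As a starting point, here is a hedged sketch that pulls the basic who/what/when/where out of a Log Analytics workspace with the azure-monitor-query package. It assumes your diagnostic settings ship logs to the workspace and that they land in the AzureDiagnostics table:

```python
# Hedged sketch: pull the basic who/what/when/where out of Log Analytics with
# the azure-monitor-query package. Assumes your diagnostic settings ship logs
# to the workspace and that they land in the AzureDiagnostics table - adjust
# the table and columns to whatever your settings actually produce.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"   # placeholder

query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| project TimeGenerated, OperationName, CallerIPAddress, ResultSignature, Resource
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))

for table in result.tables:
    for row in table.rows:
        print(list(row))
```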

Those questions don’t take into account whether you are on the service desk, in the architecture team, a project manager, or the CISO. They don’t care.

If you have some form of responsibility, you need to be able to answer them yourself or obtain the answers from someone else. If you (or they) can’t, you don’t have the correct logging.

From what I can see, the majority of “designs” match Microsoft’s reference concept (see the link below).

Here, requests are funnelled through API Management to gain a greater level of logging and security. Networking tools often log more information, which may be critical during incident response.

Whilst not all steps apply to you, it’s worth running through basic scenarios. Have person 1 run a query but not tell person 2. Have person 2 then look through the logs. If person 2 cannot tell person 1 what they ran, you may need to improve your logging.
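
You can even automate the two-person test. The sketch below sends a prompt carrying a unique canary string and then searches the workspace for it; if the canary never appears, your logs would not tell an investigator what was actually asked. Endpoint, deployment and workspace IDs are placeholders:

```python
# Sketch of the person 1 / person 2 test: send a prompt carrying a unique canary
# string, then search the workspace for it. Endpoint, deployment and workspace
# IDs are placeholders; in practice you would wait a few minutes for diagnostics
# to arrive before searching.
import uuid
from datetime import timedelta

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.monitor.query import LogsQueryClient
from openai import AzureOpenAI

canary = f"canary-{uuid.uuid4()}"

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
openai_client = AzureOpenAI(
    azure_endpoint="https://<your-instance>.openai.azure.com",
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",
)
openai_client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": f"Say hello. {canary}"}],
)

# Search every table in the workspace for the canary string.
logs_client = LogsQueryClient(DefaultAzureCredential())
result = logs_client.query_workspace(
    "<log-analytics-workspace-id>",
    f'search "{canary}"',
    timespan=timedelta(hours=1),
)
hits = sum(len(t.rows) for t in result.tables)
print("Canary found in logs" if hits else "Canary NOT found - prompt content is not being logged")
```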

Ref: https://learn.microsoft.com/en-us/azure/architecture/example-scenario/ai/log-monitor-azure-openai

Another more in-depth read is on their GitHub: https://github.com/Azure-Samples/openai-python-enterprise-logging


✅Conditional Access

It seems that Microsoft has separated the OpenAI Studio into its own application (Azure OpenAI Studio, as seen in sign-in logs). With this, you can bolster security by implementing additional rules for access.
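
If your Entra ID (Azure AD) sign-in logs are exported to a Log Analytics workspace, a quick query like this hedged sketch will show you who is actually hitting the Studio application:

```python
# Hedged sketch: see who is signing in to the Studio app, assuming your Entra ID
# (Azure AD) sign-in logs are exported to a Log Analytics workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

query = """
SigninLogs
| where AppDisplayName == "Azure OpenAI Studio"
| project TimeGenerated, UserPrincipalName, IPAddress, ConditionalAccessStatus
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    "<log-analytics-workspace-id>", query, timespan=timedelta(days=7)
)

for table in result.tables:
    for row in table.rows:
        print(list(row))
```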

Remember that network restrictions do not limit the who and the what, and neither do keys or auth tokens. It’s Conditional Access policies that can help you limit access based on device and user posture.

This is pending a question with Microsoft, as I don’t see it listed under their cloud apps but can see it in the sign-in logs. Similar to PowerShell management, I’m wondering if it will be separated out or included in the Azure management app.


Microsoft has also published some basic documentation on red teaming large language models (LLMs), which may be worth a read: link.

As AI usage grows in enterprises, more insight will be developed to help us better secure these systems. I think it’s important to take into account that this is still very early doors and there is no “expert” or “leading company” in this field. Everyone has different approaches, and some are better than others.

Whilst others are treading lightly, Microsoft seems to be hammering AI into all of its products. My advice to you is to just take a minute to understand what you are doing. If it’s in an isolated environment with no company data, by all means, innovate. The moment non-public data is uploaded, hit the brakes. Technical debt in “innovation tools” is harder to fix, as it will feel like you are caging rather than securing. Bake security in from the start and friction will be reduced. 🍰
