
Adding Models

This section explains the steps to add Azure AI Foundry models and configure the required access controls.
Step 1: Navigate to Azure AI Foundry Models in AI Gateway

From the TrueFoundry dashboard, navigate to AI Gateway > Models and select Azure AI Foundry.
Navigating to Azure AI Foundry Provider Account in AI Gateway
Step 2: Add Azure AI Foundry Account Details

Click Add Azure AI Foundry Account, give your account a unique name, and add any collaborators who need access. You can read more about access control here.
Authentication (API Key or Certificate) is configured per-model in the next step, not at the account level.
Azure AI Foundry account configuration form with collaborator fields
Step 3: Add Models from Azure AI Foundry

Click + Add Model to open the model form. For Azure AI Foundry, you add models based on your deployments in Azure, so first ensure you have deployed a model in your Azure AI Foundry project (you can follow Microsoft's instructions here). Once deployed, navigate to your deployment in the Azure AI Foundry portal to find the Target URI (endpoint URL), Deployment Name, and API Key.
Azure portal showing deployed model Target URI, deployment name, and API key
Fill in the model form with the following details:
| Field | What to Enter | Where to Find in Azure |
|---|---|---|
| Display Name | A name to identify this model in TrueFoundry | Your choice (e.g., gpt-4o-mini) |
| Azure Deployment Name | The deployment name from Azure AI Foundry (not the base model name) | Deployments → click deployment → Name field |
| Azure Endpoint | The full endpoint URL excluding the API path and query parameters (see below) | Deployments → click deployment → Target URI |
| Authentication | API Key or Certificate-based auth | Deployments → click deployment → Key |
| Model Types | Select the capabilities of this model (Chat, Embedding, etc.) | Based on the model you deployed |

Setting the Azure Endpoint

Copy the Target URI from your deployment in the Azure AI Foundry portal, but remove the API path and query parameters; the gateway appends those automatically.
| Model type | Azure gives you | Enter in TrueFoundry |
|---|---|---|
| Most models (OpenAI, Mistral, DeepSeek, Meta, Cohere, etc.) | `https://<resource>.services.ai.azure.com/models/chat/completions?api-version=...` | `https://<resource>.services.ai.azure.com/models` |
| Anthropic (Claude) | `https://<resource>.services.ai.azure.com/anthropic/v1/messages?api-version=...` | `https://<resource>.services.ai.azure.com/anthropic` |
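The trimming rule above can be sketched as a small helper (the function name is hypothetical; it assumes the first path segment of the Target URI, `/models` or `/anthropic`, is the only part the gateway needs):

```python
from urllib.parse import urlsplit

def azure_endpoint_for_truefoundry(target_uri: str) -> str:
    """Strip the API path and query string from an Azure Target URI,
    keeping only the scheme, host, and first path segment."""
    parts = urlsplit(target_uri)
    segments = [s for s in parts.path.split("/") if s]
    prefix = f"/{segments[0]}" if segments else ""
    return f"{parts.scheme}://{parts.netloc}{prefix}"

print(azure_endpoint_for_truefoundry(
    "https://myres.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview"
))  # -> https://myres.services.ai.azure.com/models
print(azure_endpoint_for_truefoundry(
    "https://myres.services.ai.azure.com/anthropic/v1/messages?api-version=2023-06-01"
))  # -> https://myres.services.ai.azure.com/anthropic
```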
Model addition form for Azure AI Foundry with fields for model name, endpoint URL, and authentication
Azure AI Foundry integration supports various AI models including OpenAI, Meta Llama, Mistral, DeepSeek, Cohere, and Anthropic Claude models deployed in your Azure account.

Inference

After adding the models, you can perform inference using an OpenAI-compatible API via the Playground or by integrating with your own application.
Code Snippet and Try in Playground buttons for each model
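As a sketch of the application-integration path, the snippet below prepares an OpenAI-compatible chat completion request against the gateway without sending it. The base URL (`controlplane.example.com` standing in for your control plane URL) and the model name `azure-foundry/gpt-4o-mini` are placeholders; use the exact values shown in the Code Snippet dialog for your model.

```python
import requests

# Placeholder values -- substitute your control plane URL, API key,
# and the model name shown on the TrueFoundry model page.
BASE_URL = "https://controlplane.example.com/api/llm"

def build_chat_request(model: str, messages: list, api_key: str) -> requests.PreparedRequest:
    """Prepare (but do not send) an OpenAI-compatible chat completion request."""
    req = requests.Request(
        "POST",
        f"{BASE_URL}/chat/completions",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        json={"model": model, "messages": messages},
    )
    return req.prepare()

prepared = build_chat_request(
    "azure-foundry/gpt-4o-mini",
    [{"role": "user", "content": "Say hello"}],
    "your-truefoundry-api-key",
)
print(prepared.url)  # https://controlplane.example.com/api/llm/chat/completions
# Send with requests.Session().send(prepared) once the placeholders are real.
```

Because the API is OpenAI-compatible, the same request shape also works through any OpenAI SDK pointed at the gateway base URL.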

Mistral OCR - Document Processing

Extract text from documents while preserving structure and formatting (headers, paragraphs, lists, tables) using the Mistral OCR model via Azure AI Foundry. The model returns Markdown and supports multiple input formats, including PDFs, images (png, jpeg, avif), and office documents (pptx, docx).
This endpoint cannot be used via the Mistral SDK. The Mistral SDK automatically appends /v1 to the base URL, which causes a URL mismatch (e.g. the request is sent to <base_url>/v1/ocr instead of <base_url>/ocr). Use direct HTTP requests (e.g. requests in Python) as shown below. See the open GitHub issue for details.
```python
import base64
import requests
import json

def encode_pdf(pdf_path):
    """Read a PDF from disk and return it as a base64-encoded string."""
    with open(pdf_path, "rb") as pdf_file:
        return base64.b64encode(pdf_file.read()).decode("utf-8")

pdf_path = "path-to-your-pdf"
base64_pdf = encode_pdf(pdf_path)

url = "https://{controlPlaneUrl}/api/llm/proxy/ocr"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer your-truefoundry-api-key"
}

# Use your TrueFoundry Azure Foundry model name (e.g. azure-foundry/mistral-ocr)
payload = {
    "model": "azure-foundry/mistral-ocr",
    "document": {
        "type": "document_url",
        "document_url": f"data:application/pdf;base64,{base64_pdf}"
    },
    "include_image_base64": True
}

response = requests.post(url, headers=headers, json=payload)

if response.status_code == 200:
    # Save output to file
    with open("ocr_output.json", "w") as f:
        json.dump(response.json(), f, indent=2)
    print("OCR output saved to ocr_output.json")
else:
    print(f"Error: {response.status_code}")
    print(response.text)
```
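Assuming the OCR response follows Mistral's documented shape (a `pages` list whose entries carry a per-page `markdown` field; verify this against your own `ocr_output.json`), the pages can be stitched back into a single Markdown document:

```python
def pages_to_markdown(ocr_response: dict) -> str:
    """Join the per-page Markdown from a Mistral OCR response into one document."""
    return "\n\n".join(
        page.get("markdown", "") for page in ocr_response.get("pages", [])
    )

# Tiny illustrative response (shape assumed from Mistral's OCR docs)
sample = {
    "pages": [
        {"index": 0, "markdown": "# Invoice"},
        {"index": 1, "markdown": "Total: $42"},
    ]
}
print(pages_to_markdown(sample))  # -> "# Invoice\n\nTotal: $42"
```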