<cg-property name="isEmbeddingModel" value="true"/>
</cg-host>
</cg-type>
</cg>
</code>

AI translate feature configuration:

**Version**

The AI Translator API has version 3.0, which is set as follows: ''<cg-property name="version" value="3.0"/>''

**Request Type**

The request type depends on the type of connection used to reach the service.

__For globally accessible services the configuration is:__ ''<cg-property name="request_type" value="translate"/>''

__For virtual network access (note the v in front of 3.0):__ ''<cg-property name="request_type" value="translator/text/v3.0/translate"/>''

**Region**

The region is an optional header. If the region property is not set, the translate service requests a translation from the closest available data centre. Since we provide our services only in Europe and the USA, only a few combinations are relevant for our configuration, for example: ''<cg-property name="region" value="eastus"/>''

<code ->
<cg>
<cg-type name="other_systems_go_here">
</cg-type>
<cg-type name="microsoftTranslateService">
<cg-property name="version" value="3.0"/>
<cg-property name="request_type" value="translate"/>
<!-- when using a virtual network configuration, the request type has to be set as follows instead -->
<!-- <cg-property name="request_type" value="translator/text/v3.0/translate"/> -->
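<!-- optional: region header (see "Region" above); if it is not set, the closest available data centre is used -->
<!-- <cg-property name="region" value="eastus"/> -->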
</cg-host>
</cg-type>
===== AI Feature Configuration: How to create Azure OpenAI services =====

1. Log in to the Azure Management Portal and navigate to the "Azure OpenAI" service:\\ \\ {{ :712:create_azure_openai_services_01.jpg?400&direct }}\\
2. Create a new service within the OpenAI service:\\ \\ {{ :712:create_azure_openai_services_02.jpg?400&direct }}\\
3. Define the basic information for the resource, e.g. select the region:\\ \\ {{ :712:create_azure_openai_services_03.jpg?400&direct }}\\
4. Create the resource by clicking "Create":\\ \\ {{ :712:create_azure_openai_services_04.jpg?400&direct }}\\
5. Select the resource and navigate to the "AI Foundry portal". The required AI models need to be configured here:\\ \\ {{ :712:create_azure_openai_services_05.jpg?400&direct }}\\
6. Within the "AI Foundry portal" these three models need to be deployed:\\ \\ {{ :712:create_azure_openai_services_06.jpg?400&direct }}\\ {{ :712:create_azure_openai_services_07.jpg?400&direct }}\\
7. Within the resource, navigate to "Keys and Endpoints":\\ \\ {{ :712:create_azure_openai_services_08.jpg?200&direct }}\\
8. Here the access data required by Stages can be exported (URL, Key):\\ \\ {{ :712:create_azure_openai_services_09.jpg?400&direct }}\\
9. Afterwards, create another resource for the translator services:\\ \\ {{ :712:create_azure_openai_services_10.jpg?400&direct }}\\ {{ :712:create_azure_openai_services_11.jpg?400&direct }}\\
10. Define name and region:\\ \\ {{ :712:create_azure_openai_services_12.jpg?400&direct }}\\
11. Leave the system identity switch turned "off":\\ \\ {{ :712:create_azure_openai_services_13.jpg?400&direct }}\\
12. Create the resource by clicking "Create":\\ \\ {{ :712:create_azure_openai_services_14.jpg?400&direct }}\\
13. Export the access data for Stages (Key, Endpoint):\\ \\ {{ :712:create_azure_openai_services_15.jpg?400&direct }}\\
To test whether the given credentials work, execute the command below in a command shell on the server that Stages is installed on:

<code ->
curl -X POST 'https://openai-methodpark-prod-msc-plc.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2025-01-01-preview' -H 'Content-Type: application/json' -H 'api-key: xxxxxxx' -d '{ "messages": [ { "role": "user", "content": "Hello" } ], "max_tokens": 1 }'
</code>
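If the key and endpoint are valid, the call returns an HTTP 200 response whose JSON body contains a ''choices'' array; a 401 response typically indicates an invalid key, and a 404 usually means that the deployment name is wrong.

The deployed embedding model (see the ''isEmbeddingModel'' property above) can be checked the same way. The call below is only a sketch: the deployment name ''text-embedding-3-small'' is an assumption and has to be replaced with the name of the embedding deployment actually created in step 6, and the key has to be replaced as well:

<code ->
curl -X POST 'https://openai-methodpark-prod-msc-plc.openai.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2023-05-15' -H 'Content-Type: application/json' -H 'api-key: xxxxxxx' -d '{ "input": "Hello" }'
</code>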
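The translator resource created in steps 9-13 can be checked in a similar way. The call below is only a minimal sketch against the global Translator endpoint; the key and the region (''westeurope'' is just an example value) have to be replaced with the data exported in step 13:

<code ->
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de' -H 'Content-Type: application/json' -H 'Ocp-Apim-Subscription-Key: xxxxxxx' -H 'Ocp-Apim-Subscription-Region: westeurope' -d '[ { "Text": "Hello" } ]'
</code>

A successful call returns a JSON array containing the translated text; an authorization error in the response body usually points to a wrong key or region.
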
<code ->
<!-- the "..." values are placeholders; the url and key correspond to the endpoint and key exported in step 8 -->
<cg-host ident="chatModel" url="${ai.model.url}" displayName="dummy_display_name">
<cg-property name="user" value="..."/>
<cg-property name="key" value="..."/>
<!-- name of the chat model deployment created in the AI Foundry portal, e.g. gpt-4o -->
<cg-property name="deployment_name" value="..."/>

<cg-property name="proxy" value="<proxy_ident>" />
</cg-host>