
Overview

OpenAI provides language models including o3, o4-mini, and GPT-4 Omni. This guide walks you through setting up OpenAI as an AI Provider in Elementum.
Prerequisites: You’ll need an OpenAI account with API access. Individual accounts and organization accounts are both supported.

Step 1: Get Your OpenAI API Key

Create an OpenAI Account

  1. Visit OpenAI Platform and sign up for an account (or log in if you already have one)
  2. Set Up Billing
    • Navigate to Settings → Billing
    • Add a payment method to enable API access
    • Consider setting up usage limits to control costs

Generate Your API Key

  1. Access API Keys: In your OpenAI dashboard, navigate to API Keys in the left sidebar.
  2. Create New Key: Click Create new secret key and give your key a descriptive name like “Elementum Integration”.
  3. Copy and Store: Critical: copy the API key immediately and store it securely. You won’t be able to view it again after closing the dialog.
  4. Set Permissions (Optional): If using an organization account, you can set specific permissions for the key. Ensure the key has access to the models you plan to use.
Never share your API key or commit it to version control. Store it in a secure location like a password manager.
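One common way to keep the key out of source control is to load it from an environment variable at runtime. The sketch below assumes the conventional `OPENAI_API_KEY` variable name; adjust it to match your own deployment.

```python
import os

def load_openai_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    OPENAI_API_KEY is the conventional variable name used by OpenAI's
    SDKs; substitute whatever your secrets manager exports.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it in your shell or "
            "secrets manager rather than committing it to version control."
        )
    return key
```

Failing fast with a clear error when the variable is missing makes misconfiguration obvious at startup rather than at the first API call.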

Step 2: Configure OpenAI in Elementum

Add the Provider

  1. In Elementum, go to Organization Settings and select the Providers tab
  2. Click + Provider and select OpenAI from the provider options
  3. Configure the provider settings:
    • Provider Name: Enter a descriptive name (e.g., “OpenAI Production”)
    • API Key: Paste your OpenAI API key
    • Endpoint URL (Optional): Custom endpoint URL for Azure OpenAI or other OpenAI-compatible APIs. Leave blank to use the default OpenAI API endpoint.
    • CloudLink: Select which CloudLinks can access models from this provider. Leave as “All CloudLinks” unless you need to restrict access.
  4. Click Save to create the provider. Elementum will automatically validate your API key — look for a green checkmark indicating a successful connection.

Step 3: Review Available Models

Once your provider is connected, the following OpenAI models are available for use in AI Services:

Language Models (LLMs)

| Model | Primary Use Case | Speed | Intelligence | Best For |
| --- | --- | --- | --- | --- |
| o3 | Complex reasoning and research | Low | Very High | Advanced problem-solving, research tasks, complex analysis |
| o4-mini | Fast, efficient reasoning | Very High | High | Daily tasks, customer support, quick analysis |
| GPT-4 Omni | Balanced performance | High | High | Content creation, detailed analysis, conversational AI |
Model Recommendations: Use o4-mini for most daily tasks and customer interactions. Choose o3 for complex reasoning that requires the highest intelligence. GPT-4 Omni provides a good balance for general-purpose applications.
Note: Embeddings for AI Search are handled exclusively through Snowflake Cortex. OpenAI models are used for LLM services only.

Step 4: Create Your First AI Service

With your OpenAI provider configured, create an AI Service:
  1. In Organization Settings, go to the Services tab
  2. Click + Service and select LLM (Language Model service). Configure the service name, select from available OpenAI models, and optionally set cost per million tokens for tracking.
  3. Use the built-in testing interface to verify your service works correctly

Usage Guidelines

Cost Management

OpenAI charges based on token usage. To manage costs:
  • Monitor usage in the OpenAI dashboard
  • Set up billing alerts
  • Review token consumption regularly
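Since both OpenAI and Elementum's optional cost-per-million-tokens field price usage per million tokens, a quick back-of-the-envelope estimate is easy to compute. The rates below are placeholders for illustration, not actual OpenAI pricing.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the cost of a single request in dollars.

    Prices are per million tokens; pass the current rates for your
    chosen model (the example rates below are hypothetical).
    """
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Example: 2,000 input and 500 output tokens at hypothetical rates
# of $1.10 (input) and $4.40 (output) per million tokens.
cost = estimate_cost(2_000, 500, 1.10, 4.40)  # 0.0044 dollars
```

Multiplying such a per-request figure by expected daily volume gives a rough budget to compare against the limits you set in the OpenAI billing dashboard.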

Best Practices

Model Selection

  • Use o4-mini for most customer support and daily automation tasks
  • Use o3 for complex reasoning, research, and advanced problem-solving
  • Use GPT-4 Omni for content creation and detailed analysis

Prompt Engineering

  • Be specific and clear in your prompts
  • Use system messages for consistent behavior
  • Provide examples for better results
  • For o3, structure complex problems step-by-step

Performance and Cost

  • Choose o4-mini for speed-critical applications
  • Use o3 sparingly for tasks requiring maximum intelligence
  • Implement caching for repeated queries
  • Consider request queuing for high-volume usage
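Caching for repeated queries can be as simple as memoizing an exact-match prompt. The sketch below uses Python's `functools.lru_cache` around a hypothetical `call_llm` stand-in; replace it with your actual client invocation.

```python
from functools import lru_cache

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; replace with
    your actual OpenAI client invocation."""
    call_llm.invocations += 1
    return f"response to: {prompt}"

call_llm.invocations = 0  # track how many real calls were made

@lru_cache(maxsize=1024)
def cached_llm(prompt: str) -> str:
    """Identical prompts hit the in-process cache instead of the API,
    saving tokens and latency on repeated queries."""
    return call_llm(prompt)
```

Note this only helps for byte-identical prompts; anything dynamic (timestamps, user names) in the prompt will defeat the cache.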

Troubleshooting

Authentication Errors

Symptoms: API key rejected or unauthorized errors
Common Causes:
  • Invalid or expired API key
  • Insufficient permissions
  • Billing issues
Solutions:
  1. Verify API key is correct and active
  2. Check billing status and payment methods
  3. Ensure key has proper permissions
  4. Regenerate API key if needed
Model Not Available

Symptoms: Desired model doesn’t appear in service creation
Common Causes:
  • Account doesn’t have access to specific models
  • Regional restrictions
  • Model deprecation
Solutions:
  1. Check OpenAI account tier and access levels
  2. Review model availability in your region
  3. Contact OpenAI support for access issues
  4. Consider alternative models
Rate Limiting

Symptoms: Requests being throttled or rejected
Common Causes:
  • Exceeding rate limits
  • High concurrent usage
  • Account tier limitations
Solutions:
  1. Implement exponential backoff
  2. Reduce request frequency
  3. Upgrade account tier if needed
  4. Distribute load across multiple keys
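Exponential backoff can be sketched as a small retry wrapper. `RateLimitError` below is a local placeholder for whatever exception your client raises on an HTTP 429; substitute the real one.

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the 429 error your OpenAI client raises."""

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry fn() with exponential backoff plus random jitter.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between
    attempts, and re-raises once max_retries is exhausted.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

The jitter spreads out retries from concurrent workers so they don't all hit the API again at the same instant.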

Security Considerations

  • Never expose API keys in client-side code
  • Rotate keys regularly
  • Use environment variables for storage
  • Monitor key usage for anomalies

Advanced Configuration

Organization Accounts

If you’re using an OpenAI organization account, you can centralize API access, billing, and team permissions under a single organization:
  • Organization ID: Required for organization accounts
  • Member Management: Control team access through OpenAI dashboard
  • Usage Tracking: Monitor usage across team members
  • Billing Management: Centralized billing for the organization

Custom Endpoints

If you use Azure OpenAI or another OpenAI-compatible API, you can point your provider at a custom endpoint instead of the default OpenAI API:
  • Endpoint URL: Enter your custom endpoint (e.g., https://your-resource.openai.azure.com/)
  • Authentication: May require additional authentication headers depending on the endpoint
  • Model Names: Custom model names may be required for non-standard endpoints
  • Rate Limits: May differ from standard OpenAI limits
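The main differences between endpoints usually come down to the base URL and the auth header style. The sketch below shows the two common auth conventions; it assumes the standard `/v1/chat/completions` path, and note that some providers (e.g. Azure OpenAI) also use a different URL path and an `api-version` parameter, so check your endpoint's documentation.

```python
def build_request(base_url: str, api_key: str,
                  use_api_key_header: bool = False):
    """Assemble the URL and auth headers for an OpenAI-compatible
    chat-completions endpoint.

    Standard OpenAI-style endpoints expect 'Authorization: Bearer <key>';
    some providers use an 'api-key' header instead. This is a sketch of
    the two common auth styles, not an exhaustive treatment.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    if use_api_key_header:
        headers = {"api-key": api_key, "Content-Type": "application/json"}
    else:
        headers = {"Authorization": f"Bearer {api_key}",
                   "Content-Type": "application/json"}
    return url, headers
```

When Elementum's Endpoint URL field is left blank, the provider uses the default OpenAI endpoint, which corresponds to the Bearer-token style above.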

Next Steps

With OpenAI configured as your AI Provider:

Create AI Services

Set up specific LLM services for your workflows

Configure Snowflake Cortex

Set up Snowflake Cortex for AI Search and embeddings

Build Agents

Create conversational AI assistants using OpenAI models

Use AI Actions

Add AI capabilities to your automation workflows