Overview
Snowflake Cortex brings AI capabilities directly to your data warehouse, allowing you to run LLMs and embedding models on your data without moving it outside your secure environment. This guide walks you through setting up Snowflake Cortex as an AI Provider in Elementum.

Step 1: Verify CloudLink Prerequisites
Before setting up Snowflake Cortex, ensure your CloudLink is properly configured:
- Go to Organization Settings → Cloud Credentials and verify your Snowflake connection is active
- Confirm your CloudLink uses key-pair authentication — if using password authentication, you’ll need to upgrade to key-pair
Snowflake Account Requirements
Your Snowflake account must meet the following requirements for Cortex AI access:
- Snowflake Edition: Enterprise or higher
- Cortex Features: Enabled and available in your region (most AWS, Azure, and GCP regions are supported)
- Permissions: USAGE privileges on Cortex functions for your service account
- Billing: Cortex usage is billed through your Snowflake account
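For reference, key-pair authentication means the service account signs in with a private key instead of a password. Below is a minimal sketch of the connection parameters the official `snowflake-connector-python` package expects for key-pair auth; the account and user values are placeholders, since in practice Elementum's CloudLink supplies these credentials for you.

```python
# Hedged sketch: the shape of a key-pair connection with the official
# snowflake-connector-python package. Account/user values are placeholders;
# Elementum's CloudLink manages these credentials for you.

def keypair_connect_params(account: str, user: str, private_key_der: bytes) -> dict:
    """Build the kwargs snowflake.connector.connect() accepts for
    key-pair auth. Note there is no 'password' field at all."""
    return {
        "account": account,
        "user": user,
        # DER-encoded private key bytes; the connector signs a JWT with them.
        "private_key": private_key_der,
    }

params = keypair_connect_params("myorg-myaccount", "ELEMENTUM_SVC", b"<der-bytes>")
```

With a real key you would pass `params` straight to `snowflake.connector.connect(**params)`. A password-authenticated account has no private key to sign with, which is why the CloudLink must be upgraded to key-pair first.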
Step 2: Configure Snowflake Cortex in Elementum
When you have a CloudLink connection with key-pair authentication, Elementum automatically discovers available Snowflake Cortex capabilities.

Add the Provider
- In Elementum, go to Organization Settings and select the Providers tab
- Click + Provider and select Snowflake — you’ll see your existing CloudLink connections listed
- Configure the provider settings (Basic Configuration):
  - Provider Name: Enter a descriptive name (e.g., “Snowflake Cortex AI”)
  - CloudLink: Select your key-pair authenticated CloudLink
  - Service Account Credentials: Auto-populated from your CloudLink
- Click Save to create the provider. Elementum will automatically validate your connection and discover available models — look for a green checkmark indicating a successful connection.
Step 3: Review Available Models
Once your provider is connected, the following Snowflake Cortex models are available for use in AI Services.

Language Models (LLMs)
| Model | Primary Use Case | Speed | Intelligence | Best For |
|---|---|---|---|---|
| Claude Sonnet 4 | Advanced reasoning and analysis | Moderate | Very High | Complex problem-solving, detailed analysis, premium applications |
| Claude 3.7 Sonnet | Cost-effective reasoning | High | High | Daily tasks, customer support, balanced performance |
| Claude Opus 4 | Most complex reasoning | Low | Very High | Extremely complex tasks, research, advanced analysis (expensive) |
| Mistral Large 2 | European AI compliance | Moderate | High | European regulations, multilingual tasks |
Model Recommendations: Use Claude 3.7 Sonnet for most daily tasks and cost-effective operations. Choose Claude Sonnet 4 for advanced reasoning and premium applications. Reserve Claude Opus 4 for the most complex tasks that require maximum intelligence (note: significantly higher cost).
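Under the hood, Cortex LLM calls are plain SQL. As a rough sketch of the statement behind a single-turn call to Snowflake's documented SNOWFLAKE.CORTEX.COMPLETE function (the model identifier string below is an assumption; exact names vary by region and release, so confirm them in Snowflake's documentation):

```python
# Sketch of the SQL behind a Cortex LLM call. The model identifier string
# is an assumption; confirm the exact name for your region in Snowflake's docs.

def cortex_complete_sql(model: str, prompt: str) -> str:
    """Render a single-turn SNOWFLAKE.CORTEX.COMPLETE call,
    doubling single quotes so the prompt is SQL-safe."""
    safe_prompt = prompt.replace("'", "''")
    return f"SELECT SNOWFLAKE.CORTEX.COMPLETE('{model}', '{safe_prompt}') AS response"

sql = cortex_complete_sql("claude-3-7-sonnet", "Summarize last quarter's returns")
```

Elementum issues these calls for you once a service is configured, so this is only to show where the work actually runs: inside your warehouse.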
Embedding Models
| Model | Primary Use Case | Speed | Quality | Best For |
|---|---|---|---|---|
| Snowflake Arctic L V2.0 | Latest high-quality embeddings | High | Very High | Modern search applications, premium AI Search |
| Snowflake Arctic L V1.5 | Reliable embeddings | High | High | Stable search applications, production use |
Embedding Recommendations: Use Snowflake Arctic L V2.0 for new implementations and highest quality search results. Arctic L V1.5 provides reliable performance for production workloads.
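Embedding calls follow the same SQL pattern. This hedged sketch assumes Snowflake's documented EMBED_TEXT_1024 function and the model string 'snowflake-arctic-embed-l-v2.0'; both the function's dimension pairing and the exact model name should be verified against Snowflake's docs before use:

```python
# Assumed function and model names; verify EMBED_TEXT_1024 matches your
# chosen model's output dimension in Snowflake's documentation.

def embed_column_sql(model: str, column: str, table: str) -> str:
    """Embed every row of a text column with a Cortex embedding model."""
    return (
        f"SELECT {column}, "
        f"SNOWFLAKE.CORTEX.EMBED_TEXT_1024('{model}', {column}) AS embedding "
        f"FROM {table}"
    )

sql = embed_column_sql("snowflake-arctic-embed-l-v2.0", "description", "products")
```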
Model Availability: Available models depend on your Snowflake account tier, region, and current Cortex offerings. Model selection may vary over time.
Step 4: Create Your First AI Service
With your Snowflake Cortex provider configured, create an AI Service:
- In Organization Settings, go to the Services tab
- Click + Service and select the service type:
- LLM (Language Model service) — select from available Cortex language models, configure a service name, and optionally set cost per million tokens for tracking
- Embedding service — for AI Search capabilities, select an embedding model like Snowflake Arctic L V2.0
- Use the built-in testing interface to verify your service works correctly
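The optional cost-per-million-tokens field is simple linear arithmetic. A quick worked example, using a made-up $3.00-per-million rate:

```python
def token_cost_usd(tokens: int, usd_per_million_tokens: float) -> float:
    """Linear cost model used for tracking: tokens scaled to millions."""
    return tokens / 1_000_000 * usd_per_million_tokens

# 250,000 tokens at a hypothetical $3.00 per million tokens:
cost = token_cost_usd(250_000, 3.00)  # 0.75 USD
```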
Usage Guidelines
Cost Management
Snowflake Cortex usage is billed through your Snowflake account. To manage costs:
- Monitor Cortex function usage in the Snowflake console
- Track warehouse usage for AI workloads
- Set up Snowflake resource monitors and billing alerts
- Review token consumption regularly
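One way to review token consumption is to query account usage directly. This sketch assumes Snowflake's documented ACCOUNT_USAGE view CORTEX_FUNCTIONS_USAGE_HISTORY and its column names; verify both against your account before relying on it:

```python
# Assumed view and column names from Snowflake's ACCOUNT_USAGE schema;
# confirm CORTEX_FUNCTIONS_USAGE_HISTORY is available in your account.
CORTEX_USAGE_LAST_7_DAYS = """
SELECT model_name,
       SUM(tokens)        AS total_tokens,
       SUM(token_credits) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.CORTEX_FUNCTIONS_USAGE_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY model_name
ORDER BY total_credits DESC
""".strip()
```

Pairing a query like this with a Snowflake resource monitor gives you both visibility and a hard spending backstop.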
Best Practices
Model Selection
- Use Claude 3.7 Sonnet for most daily automation and customer support tasks
- Use Claude Sonnet 4 for advanced reasoning and premium applications
- Reserve Claude Opus 4 for the most complex tasks requiring maximum intelligence
- Use Mistral Large 2 for European regulatory compliance and multilingual tasks
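The recommendations above can be captured as a small routing helper. This is purely illustrative of the guidance in this section; the tier names and the fallback default are this sketch's own assumptions, not an Elementum API:

```python
def pick_cortex_model(task_tier: str) -> str:
    """Map a coarse task tier to the model recommended in this guide."""
    routing = {
        "daily": "Claude 3.7 Sonnet",         # cost-effective default
        "advanced": "Claude Sonnet 4",        # premium reasoning
        "max": "Claude Opus 4",               # most complex, highest cost
        "multilingual": "Mistral Large 2",    # European compliance
    }
    # Fall back to the cost-effective option for unrecognized tiers.
    return routing.get(task_tier, "Claude 3.7 Sonnet")
```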
Prompt Engineering
- Be specific and clear in your prompts
- Use system messages for consistent behavior
- Provide examples for better results
- Structure complex problems step-by-step for reasoning models
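A common way to apply these tips together is to assemble a role-tagged message list: the system message first for consistent behavior, then few-shot examples, then the user's question. The helper below is a generic sketch of that structure, not a Cortex-specific API:

```python
def build_messages(system, user, examples=None):
    """Order: system message, optional (question, answer) example pairs,
    then the actual user turn."""
    messages = [{"role": "system", "content": system}]
    for question, answer in examples or []:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user})
    return messages

msgs = build_messages(
    system="You are a concise support assistant. Answer in one sentence.",
    user="How do I reset my password?",
    examples=[("How do I log in?", "Use the Sign In button on the home page.")],
)
```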
Performance Optimization
- Scale warehouses based on model complexity and concurrent usage
- Enable auto-scaling for variable workloads
- Choose models appropriate for the task complexity — avoid over-provisioning
- Implement result caching for repeated queries
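For the caching point, a minimal sketch using `functools.lru_cache`. Here `fake_execute` is a hypothetical stand-in for whatever actually runs the Cortex call; note that real model responses are not always deterministic, so decide whether caching fits your use case before adopting this pattern:

```python
import functools

def make_cached_complete(execute_complete, maxsize=1024):
    """Wrap an executor so identical (model, prompt) pairs hit a cache."""
    @functools.lru_cache(maxsize=maxsize)
    def cached(model: str, prompt: str) -> str:
        return execute_complete(model, prompt)
    return cached

calls = []
def fake_execute(model, prompt):          # hypothetical stand-in executor
    calls.append(prompt)
    return f"reply to {prompt}"

cached = make_cached_complete(fake_execute)
cached("claude-3-7-sonnet", "hi")
cached("claude-3-7-sonnet", "hi")   # second call served from cache
# calls == ["hi"]: the underlying executor ran only once
```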
Troubleshooting
Cortex Functions Not Available
Symptoms: Cannot access Snowflake Cortex AI functions

Common Causes:
- Using password authentication instead of key-pair
- Insufficient permissions on Cortex functions
- Account doesn’t have Cortex access
Solutions:
- Verify key-pair authentication is configured on your CloudLink
- Check USAGE privileges on Cortex functions
- Contact Snowflake support for account access
- Verify account edition (Enterprise or higher) and region support
Model Discovery Issues
Symptoms: Expected models don’t appear in service creation

Common Causes:
- Regional model availability
- Account tier limitations
- CloudLink connection issues
Solutions:
- Verify CloudLink connection is active
- Check regional model availability in Snowflake documentation
- Review account tier and permissions
- Refresh provider configuration
Performance Issues
Symptoms: Slow AI response times or timeouts

Common Causes:
- Undersized warehouse for AI workloads
- Inefficient query patterns
- Large data volumes
Solutions:
- Scale up warehouse size
- Optimize data queries
- Implement result caching
- Consider dedicated warehouses for AI workloads
Security Considerations
Snowflake Cortex runs AI directly on your data warehouse, which provides key security advantages:
- Data never leaves your Snowflake environment
- Maintains existing data governance and compliance policies
- Leverages Snowflake’s built-in security model and encryption
- All access is auditable through Snowflake’s audit logging
Next Steps
With Snowflake Cortex configured as your AI Provider:
- Create AI Services: Set up specific LLM and embedding services using Cortex models
- Enable AI Search: Use Snowflake embeddings for intelligent search on your data
- Build Agents: Create agents that can directly access your Snowflake data
- Connect Cortex Agents: Integrate Snowflake Cortex Agents into your Apps