What Are AI Services?

AI Services are specific AI model instances that you configure for use in your workflows. While AI Providers establish connections to external AI platforms, AI Services define the actual models, settings, and configurations that power your AI features.
Prerequisites: You must have at least one AI Provider configured before creating AI Services. See the AI Overview for setup instructions.
Types of AI Services

Elementum supports two types of AI Services:
  • LLM Services — Language models for text generation, conversation, and analysis. Used for agents, automation actions, and content generation.
  • Embedding Services — Embedding models for semantic search and similarity analysis. Used in AI Search to convert data into vector representations for semantic querying.

Create AI Services

Access AI Services

  1. Click the Settings icon, then select AI Services under the Platform section of the menu
  2. Click + Service to open the service creation dialog
  3. Choose a provider: OpenAI, Snowflake, or Gemini

Create an LLM Service

LLM Services power conversational AI, text generation, and intelligent automation.
  • Service Name: Give your service a descriptive name (e.g., “Customer Support Bot”)
  • Provider: Select your configured AI Provider
  • Model: Choose from available models for your provider. See AI Models for a detailed comparison of capabilities, use cases, and pricing considerations across all providers.
  • Cost Per Million Tokens: Optional cost tracking (varies by provider)
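To see how the optional Cost Per Million Tokens setting translates into spend, the arithmetic is straightforward. The sketch below uses made-up token counts and rate values purely for illustration; it is not an Elementum API.

```python
# Illustrative only: how a per-million-token rate translates to spend.
# The rate and token counts below are example values, not real pricing.

def estimated_cost(prompt_tokens, completion_tokens, rate_per_million):
    """Rough cost estimate for a single call, in the rate's currency."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * rate_per_million

# Example: 1,200 prompt tokens + 300 completion tokens at $2.50 per million
print(estimated_cost(1200, 300, 2.50))  # → 0.00375
```

Note that most providers price prompt (input) and completion (output) tokens at different rates, so a single blended figure is only an approximation.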

Create an Embedding Service

Embedding Services enable AI Search and semantic understanding.
  • Service Name: Descriptive name (e.g., “Document Search Embeddings”)
  • Provider: Select your configured AI Provider
  • Model: Choose from available embedding models:
      • Snowflake Arctic L V2.0 — Latest high-quality embeddings
      • Snowflake Arctic L V1.5 — Reliable embeddings for production use
  • Dimensions: Embedding vector size (varies by model)

Test Services

Before using AI Services in production, test them from the Services list:
  1. Click on a service name to open the testing interface
  2. For LLM Services: enter sample prompts, review AI-generated responses, adjust parameters, and monitor response times
  3. For Embedding Services: enter sample text, review generated embedding vectors, and test similarity calculations between texts

Manage and Optimize

Once your services are created and tested, keep the following in mind:
  • Model selection — The right model depends on your use case. For recommendations by task type (agents, classification, content generation, semantic search) and guidance on balancing cost and performance, see the Model Selection Guide.
  • Cost optimization — Right-size your model choices, write concise prompts, and set appropriate token limits to control spending. See Cost Optimization for detailed strategies.
  • Multiple providers — You can configure services across different providers for redundancy or to use different model strengths for different tasks. See AI Providers for setup details.
  • Feature-specific guidance — For details on how AI Services integrate with specific capabilities, see AI in Automations for automation actions, Building Agents for conversational agents, and AI Search for embedding-powered search.
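Two of the cost-control levers above, concise prompts and token limits, can be sketched as a simple pre-flight step on each request. The field names and limits below are hypothetical, not an Elementum or provider API:

```python
# Hypothetical sketch of right-sizing a request before sending it:
# trim the prompt and cap the output length. Names are illustrative.

MAX_PROMPT_CHARS = 4000   # rough proxy for a prompt-token budget
MAX_OUTPUT_TOKENS = 256   # cap completion length to control spend

def build_request(prompt: str) -> dict:
    trimmed = prompt[:MAX_PROMPT_CHARS]   # keep prompts concise
    return {"prompt": trimmed, "max_tokens": MAX_OUTPUT_TOKENS}

req = build_request("Summarize this ticket: " + "x" * 10_000)
print(len(req["prompt"]), req["max_tokens"])  # → 4000 256
```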

Troubleshooting

Symptoms: Cannot create new AI services
Common Causes:
  • AI Provider not configured
  • Invalid model selection
  • Insufficient permissions
Solutions:
  1. Verify AI Provider is properly configured
  2. Check model availability for your provider
  3. Ensure proper permissions are granted
  4. Try creating the service with a different model
Symptoms: Slow response times or quality issues
Common Causes:
  • Inappropriate model selection
  • Suboptimal configuration
  • Network or provider issues
Solutions:
  1. Review model selection for your use case
  2. Optimize service configuration settings
  3. Check provider status and network connectivity
  4. Consider switching to a different model
Symptoms: Unexpected high token usage or costs
Common Causes:
  • Inefficient prompts or queries
  • Inappropriate model selection
  • Excessive API calls
Solutions:
  1. Review and optimize prompts
  2. Use more cost-effective models where appropriate
  3. Implement caching and batching
  4. Monitor and analyze usage patterns
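Caching (solution 3 above) pays off whenever the same prompt is sent repeatedly. A minimal sketch, where `call_model` is a hypothetical stand-in for a real, billable provider call:

```python
# A minimal sketch of response caching to avoid repeat calls for
# identical prompts. `call_model` stands in for a real provider call.
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=1024)
def call_model(prompt: str) -> str:
    CALLS["count"] += 1   # pretend this is a billable API call
    return f"response to: {prompt}"

call_model("What is our refund policy?")
call_model("What is our refund policy?")  # served from cache, no new call
print(CALLS["count"])  # → 1
```

Caching only helps for exact-match prompts; for near-duplicate queries, batching requests or normalizing prompts before sending is usually more effective.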

Next Steps

AI Models

Compare models across providers to choose the right one for your use case

Enable AI Search

Use embedding services to power semantic search across your data

Build Agents

Create conversational AI assistants using your LLM services

AI in Automations

Add AI-driven actions to your automation workflows