AI Embeddings
Made Simple
Transform your data into high-quality vector embeddings using state-of-the-art open-weight models. One package, multiple models, infinite possibilities.
from amde import Amde

# Initialize client
client = Amde(api_key="api_key")

# Generate embeddings
resp = client.embed(
    model="sentence-transformers/all-MiniLM-L6-v2",
    input_data="Your text data here",
)

# Use embeddings
print(resp.embedding)

Supported Models
Choose from the best open-weight embedding models
MiniLM-L6
Sentence Transformers
NoInstruct-Small
avsolatorio
MobileNetV3
YAMNet
ONNX
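Whichever model produces the vectors, downstream use looks the same: compare embeddings, most commonly with cosine similarity. A minimal, dependency-free sketch (the toy four-dimensional vectors below stand in for real `resp.embedding` output, which will be much longer):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real model output
v1 = [0.1, 0.3, 0.5, 0.1]
v2 = [0.1, 0.28, 0.52, 0.09]   # nearly the same direction as v1
v3 = [-0.4, 0.1, -0.2, 0.9]    # points somewhere else entirely

print(cosine_similarity(v1, v2))  # close to 1.0: semantically similar
print(cosine_similarity(v1, v3))  # much lower: dissimilar
```

Scores near 1.0 mean the two inputs were embedded in nearly the same direction; scores near 0 (or negative) mean they were not.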
Built for Scale
Lightning Fast
Sub-100ms response times with global edge deployment and intelligent caching.
Batch Processing
Process thousands of documents simultaneously with our optimized batch endpoints.
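On the client side, batch processing usually starts with chunking your corpus before sending it. The helper below is plain Python; the commented-out call is a hypothetical sketch, since this page does not document the exact batch signature of `client.embed`:

```python
def chunked(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

documents = [f"document {n}" for n in range(2500)]
batches = list(chunked(documents, 1000))
print([len(b) for b in batches])  # [1000, 1000, 500]

# Hypothetical usage (assumes the batch endpoint accepts a list of strings):
# for batch in batches:
#     resp = client.embed(
#         model="sentence-transformers/all-MiniLM-L6-v2",
#         input_data=batch,
#     )
```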
Developer First
Comprehensive SDKs and detailed documentation for rapid integration.

Enterprise Security
HIPAA compliant with end-to-end encryption and zero data retention policies.
Real-time Analytics
Monitor usage, track performance, and optimize costs with detailed analytics.
Vector Integration
Direct integration with Pinecone, Weaviate, Qdrant, and other vector databases.
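What a vector database adds is indexed nearest-neighbor search over your stored embeddings. The real integrations go through each database's own client library; the in-memory store below is only an illustration of the pattern (upsert vectors, then query for the closest matches), not any vendor's API:

```python
import math

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class ToyVectorStore:
    """In-memory stand-in for a vector database (Pinecone, Weaviate, Qdrant, ...)."""

    def __init__(self):
        self._vectors = {}

    def upsert(self, doc_id, vector):
        self._vectors[doc_id] = vector

    def query(self, vector, top_k=3):
        # Score every stored vector against the query, highest similarity first
        scored = [(_cosine(vector, v), doc_id) for doc_id, v in self._vectors.items()]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:top_k]]

store = ToyVectorStore()
store.upsert("a", [1.0, 0.0, 0.0])
store.upsert("b", [0.9, 0.1, 0.0])
store.upsert("c", [0.0, 1.0, 0.0])

print(store.query([1.0, 0.05, 0.0], top_k=2))  # ['a', 'b']
```

A production store replaces the linear scan with an approximate index (e.g. HNSW) so queries stay fast at millions of vectors.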
Simple API Pricing
Pay only for what you use. No hidden fees.
FREE
Perfect for testing and small experiments.
- 10K embeddings/month
- All models available
- Community support
- Rate limit: 100 RPM
PRO
Ideal for production-level apps and teams.
- Pay per embedding
- Unlimited requests
- Priority support
- 99.9% SLA
- Batch processing
ENTERPRISE
Best for large-scale enterprise integrations.
- Volume discounts
- Custom models
- Dedicated support
- SLA guarantee
- On-premise deployment