Commit 34812d6079 by Gal, 2025-07-15 23:52:11 +02:00
Signed by: gal (GPG Key ID: F035BC65003BC00B)
48 changed files with 13609 additions and 0 deletions

.env.example (new file, 12 lines)

@@ -0,0 +1,12 @@
# Backend API Configuration
VITE_API_BASE_URL=http://localhost:8000
VITE_WS_BASE_URL=ws://localhost:8000
# Optional: Development Configuration
VITE_DEV_MODE=true
VITE_LOG_LEVEL=info
# Optional: Feature Flags
VITE_ENABLE_SPEECH_FEATURES=true
VITE_ENABLE_AI_CHAT=true
VITE_ENABLE_TRADITIONAL_MODE=true

.gitignore (new file, vendored, 73 lines)

@@ -0,0 +1,73 @@
# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
# Credentials and secrets
credentials/
*.json
!package*.json
!**/package*.json
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
# Dependencies
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# Docker
.dockerignore

README.md (new file, 55 lines)

@@ -0,0 +1,55 @@
# Learn Indonesian App
A Vue.js and Python application for learning Indonesian through realistic everyday scenarios.
## Features
- 🍽️ Restaurant scenarios
- 🛒 Market interactions
- 🚌 Transportation conversations
- 🏨 Hotel check-ins
- 🎤 Speech recognition practice
- ✅ Interactive response checking
## Setup
### Frontend (Vue.js)
```bash
npm install
npm run dev
```
### Backend (Python)
```bash
cd backend
uv sync
uv run python main.py
```
### Development
```bash
# Lint and format
uv run ruff check .
uv run ruff format .
# Run tests
uv run pytest
```
## Usage
1. Start the backend server (port 8000)
2. Start the frontend development server (port 3000)
3. Navigate to http://localhost:3000
4. Choose a scenario and practice Indonesian conversations
## Scenarios
Each scenario includes:
- Realistic dialogue in Indonesian with English translations
- Interactive response input
- Vocabulary explanations
- Speech recognition practice
- Intelligent response checking
Perfect for learning practical Indonesian for everyday situations!

SETUP.md (new file, 168 lines)

@@ -0,0 +1,168 @@
# Indonesian Learning App with AI Speech Integration
## Setup Instructions
### 1. Prerequisites
- Python 3.11+
- Node.js 16+
- Google Cloud Account
- OpenAI API Key
### 2. Google Cloud Setup
1. Create a new Google Cloud project or use an existing one
2. Enable the following APIs:
- Cloud Speech-to-Text API
- Cloud Text-to-Speech API
3. Create a service account with the following roles:
- Speech Client
- Text-to-Speech Client
4. Download the service account key JSON file
### 3. Environment Configuration
#### Backend Configuration
1. Copy the environment template:
```bash
cd backend
cp .env.example .env
```
2. Edit `backend/.env` with your credentials:
```bash
# Required
GOOGLE_APPLICATION_CREDENTIALS=path/to/your/service-account-key.json
OPENAI_API_KEY=your-openai-api-key-here
# Optional - customize as needed
OPENAI_MODEL=gpt-4o-mini
GOOGLE_CLOUD_PROJECT=your-project-id
SPEECH_LANGUAGE_CODE=id-ID
TTS_VOICE_NAME=id-ID-Standard-A
TTS_VOICE_GENDER=FEMALE
HOST=0.0.0.0
PORT=8000
CORS_ORIGINS=http://localhost:3000,http://localhost:5173
```
#### Frontend Configuration
1. Copy the environment template:
```bash
cp .env.example .env
```
2. Edit `.env` if needed (defaults should work):
```bash
VITE_API_BASE_URL=http://localhost:8000
VITE_WS_BASE_URL=ws://localhost:8000
VITE_ENABLE_SPEECH_FEATURES=true
VITE_ENABLE_AI_CHAT=true
```
### 4. Backend Setup
```bash
cd backend
pip install uv # if not already installed
uv sync
```
### 5. Frontend Setup
```bash
npm install
```
### 6. Running the Application
#### Start the backend:
```bash
cd backend
uv run python main.py
```
The backend will run on `http://localhost:8000`.
#### Start the frontend:
```bash
npm run dev
```
The frontend will run on `http://localhost:5173`.
### 7. Using the App
1. **Traditional Mode**: The original structured learning experience
2. **AI Chat Mode**: New conversational AI with speech-to-text and text-to-speech
#### AI Chat Features:
- **Speech Input**: Click "🎤 Speak" to record your voice in Indonesian
- **Text Input**: Type messages in Indonesian
- **AI Response**: GPT-4o-mini responds in Indonesian with educational guidance
- **Speech Output**: AI responses are automatically converted to speech
- **Real-time**: WebSocket streaming for low-latency conversation
### 8. Environment Variables Summary
#### Backend (.env file):
```bash
# Required
GOOGLE_APPLICATION_CREDENTIALS=path/to/your/service-account-key.json
OPENAI_API_KEY=your-openai-api-key
# Optional Configuration
OPENAI_MODEL=gpt-4o-mini
GOOGLE_CLOUD_PROJECT=your-project-id
SPEECH_LANGUAGE_CODE=id-ID
SPEECH_SAMPLE_RATE=48000
SPEECH_ENCODING=WEBM_OPUS
TTS_LANGUAGE_CODE=id-ID
TTS_VOICE_NAME=id-ID-Standard-A
TTS_VOICE_GENDER=FEMALE
TTS_SPEAKING_RATE=1.0
TTS_PITCH=0.0
HOST=0.0.0.0
PORT=8000
DEBUG=false
CORS_ORIGINS=http://localhost:3000,http://localhost:5173
```
#### Frontend (.env file):
```bash
VITE_API_BASE_URL=http://localhost:8000
VITE_WS_BASE_URL=ws://localhost:8000
VITE_DEV_MODE=true
VITE_LOG_LEVEL=info
VITE_ENABLE_SPEECH_FEATURES=true
VITE_ENABLE_AI_CHAT=true
VITE_ENABLE_TRADITIONAL_MODE=true
```
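For reference, the backend presumably reads these variables with type conversions and the defaults documented above. The sketch below mirrors that summary; it is an assumption about the loader's shape, not the actual code in `backend/`:

```python
import os

def load_backend_settings(env=os.environ):
    """Read backend settings, falling back to the documented defaults."""
    return {
        "openai_model": env.get("OPENAI_MODEL", "gpt-4o-mini"),
        "speech_language_code": env.get("SPEECH_LANGUAGE_CODE", "id-ID"),
        "speech_sample_rate": int(env.get("SPEECH_SAMPLE_RATE", "48000")),
        "tts_speaking_rate": float(env.get("TTS_SPEAKING_RATE", "1.0")),
        "port": int(env.get("PORT", "8000")),
        "debug": env.get("DEBUG", "false").lower() == "true",
        # CORS_ORIGINS is a comma-separated list
        "cors_origins": [
            o.strip()
            for o in env.get(
                "CORS_ORIGINS",
                "http://localhost:3000,http://localhost:5173",
            ).split(",")
        ],
    }

settings = load_backend_settings({})  # empty env -> documented defaults
```

Note that numeric values and booleans arrive as strings from `.env` files, so they need explicit conversion.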
### 9. Testing
- Visit any scenario (warung, ojek, alfamart)
- Toggle between "📝 Traditional" and "🗣️ AI Chat" modes
- Test speech input (requires microphone permission)
- Verify audio output plays automatically
### 10. Troubleshooting
#### Common Issues:
1. **Microphone not working**: Check browser permissions
2. **Audio not playing**: Check browser audio settings
3. **Google Cloud errors**: Verify service account permissions
4. **OpenAI errors**: Check API key and usage limits
5. **WebSocket connection issues**: Check backend is running on port 8000
#### Browser Compatibility:
- Chrome/Edge: Full support
- Firefox: Limited WebRTC support
- Safari: May require additional permissions
### 11. Architecture
```
User speaks → Browser captures audio → WebSocket →
Google Cloud Speech-to-Text → OpenAI GPT-4o-mini →
Google Cloud Text-to-Speech → WebSocket → Browser plays audio
```
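The same round trip can be sketched as a chain of stages. The stubs below stand in for the real Google Cloud and OpenAI calls; the function names and canned return values are illustrative, not the backend's actual API:

```python
# Stub stages standing in for the real cloud services; names and
# return values here are illustrative only.
def speech_to_text(audio_chunk: bytes) -> str:
    return "Selamat pagi"            # pretend Google STT transcript

def chat_reply(transcript: str) -> str:
    return "Pagi! Apa kabar?"        # pretend GPT-4o-mini reply

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")      # pretend synthesized audio

def handle_turn(audio_chunk: bytes) -> bytes:
    """One conversational turn over the WebSocket: audio in, audio out."""
    transcript = speech_to_text(audio_chunk)
    reply = chat_reply(transcript)
    return text_to_speech(reply)

audio_out = handle_turn(b"\x00\x01")
```

Each WebSocket message from the browser triggers one such turn; the reply audio streams back on the same connection.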
### 12. Cost Considerations
- Google Cloud Speech-to-Text: ~$0.006 per 15-second chunk
- Google Cloud Text-to-Speech: ~$0.000004 per character
- OpenAI GPT-4o-mini: ~$0.150 per 1M input tokens, ~$0.600 per 1M output tokens
For typical usage (5-10 minutes of conversation), costs should be under $0.50 per session.
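Those rates can be sanity-checked with quick arithmetic; the usage figures below (minutes spoken, characters synthesized, token counts) are illustrative assumptions:

```python
# Per-unit rates from the list above.
STT_PER_15S = 0.006                      # USD per 15-second chunk
TTS_PER_CHAR = 0.000004                  # USD per character
GPT_IN_PER_TOKEN = 0.150 / 1_000_000     # USD per input token
GPT_OUT_PER_TOKEN = 0.600 / 1_000_000    # USD per output token

def session_cost(minutes_spoken, tts_chars, tokens_in, tokens_out):
    stt = (minutes_spoken * 60 / 15) * STT_PER_15S
    tts = tts_chars * TTS_PER_CHAR
    llm = tokens_in * GPT_IN_PER_TOKEN + tokens_out * GPT_OUT_PER_TOKEN
    return stt + tts + llm

# Assumed session: 8 minutes of speech, ~6,000 TTS characters,
# ~20k input / 8k output tokens.
cost = session_cost(8, 6_000, 20_000, 8_000)
```

Under these assumptions the session comes to roughly $0.22, with speech-to-text dominating the total.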

STREET_LINGO_README.md (new file, 205 lines)

@@ -0,0 +1,205 @@
# Street Lingo Platform 🌍
Learn languages through real-world scenarios with AI-powered conversations.
## Available Languages
### 🇮🇩 Indonesian (Learn Indonesian)
- **URL**: `http://localhost:3000`
- **Scenarios**: Warung, Ojek, Alfamart, Coffee Shop
- **Focus**: Everyday Indonesian conversations and cultural contexts
### 🇩🇪 German (Deutsch lernen in Berlin)
- **URL**: `http://localhost:3001`
- **Scenarios**: Späti, WG Viewing, Bürgeramt, Biergarten, U-Bahn
- **Focus**: Berlin-specific German for expats
## Quick Start
### Prerequisites
- Python 3.8+
- Node.js 16+
- Google Cloud credentials (for Speech-to-Text and Text-to-Speech)
- OpenAI API key
### 1. Environment Setup
```bash
# Backend environment
cp backend/.env.example backend/.env
# Edit backend/.env with your API keys
# Set up Google Cloud credentials
export GOOGLE_APPLICATION_CREDENTIALS="path/to/your/service-account-key.json"
```
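A small preflight check can catch missing credentials before startup. A sketch, where the required-variable names come from the backend `.env` template:

```python
import os

REQUIRED = ("OPENAI_API_KEY", "GOOGLE_APPLICATION_CREDENTIALS")

def missing_required(env=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example: only the OpenAI key is set, so the credentials path is reported.
missing = missing_required({"OPENAI_API_KEY": "sk-..."})
```

Running such a check in `main.py` before the server binds its port gives a clearer error than a failed first API call.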
### 2. Install Dependencies
```bash
# Backend dependencies
cd backend
pip install -r requirements.txt
# Indonesian app dependencies (if not already installed)
cd ..
npm install
# German app dependencies
cd apps/german-app
npm install
```
### 3. Start All Services
```bash
# From the project root
./start-street-lingo.sh
```
This will start:
- Backend API server on port 8000
- Indonesian app on port 3000
- German app on port 3001
## Manual Setup (Alternative)
### Backend
```bash
cd backend
python main.py
```
### Indonesian App
```bash
npm run dev
```
### German App
```bash
cd apps/german-app
npm run dev
```
## API Endpoints
### Language-Specific Scenarios
- `GET /api/scenarios/indonesian` - Indonesian scenarios
- `GET /api/scenarios/german` - German scenarios
- `GET /api/scenarios` - All scenarios for all languages
### WebSocket Connections
- `ws://localhost:8000/ws/speech/indonesian` - Indonesian speech interface
- `ws://localhost:8000/ws/speech/german` - German speech interface
### Translation
- `POST /api/translate` - Translate text between languages
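The exact request schema for `/api/translate` is not shown here; a plausible client helper, with the payload field names as assumptions (check `http://localhost:8000/docs` for the real schema):

```python
def build_translate_request(text, source_lang, target_lang,
                            base_url="http://localhost:8000"):
    """Build the URL and JSON body for a translation call.

    The field names below are assumptions; verify them against /docs.
    """
    return (
        f"{base_url}/api/translate",
        {"text": text, "source_lang": source_lang, "target_lang": target_lang},
    )

url, body = build_translate_request("Selamat pagi", "id", "en")
# send with e.g. requests.post(url, json=body)
```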
## Architecture
```
street-lingo/
├── backend/ # Shared backend
│ ├── core/ # Core language-agnostic services
│ ├── languages/ # Language-specific implementations
│ │ ├── indonesian/ # Indonesian models & services
│ │ └── german/ # German models & services
│ └── main.py # Main FastAPI application
├── src/ # Indonesian frontend
├── apps/german-app/ # German frontend
└── start-street-lingo.sh # Startup script
```
## Features
### 🎙️ Speech Recognition
- Real-time speech-to-text in Indonesian and German
- Optimized for conversational speech patterns
### 🗣️ Text-to-Speech
- Character-specific voices for immersive conversations
- Indonesian: Chirp3-HD voices
- German: Neural2 voices with regional characteristics
### 🤖 AI Conversations
- Context-aware conversations using OpenAI GPT
- Goal-based learning with progress tracking
- Cultural and linguistic authenticity
### 🎯 Scenario-Based Learning
- Real-world situations you'll encounter
- Progressive difficulty and goal completion
- Immediate feedback and corrections
## Development
### Adding New Languages
1. Create new language directory in `backend/languages/`
2. Implement language-specific models and services
3. Create frontend app in `apps/[language]-app/`
4. Update routing in `main.py`
### Adding New Scenarios
1. Define scenarios in `backend/languages/[language]/models.py`
2. Create personality with goals and helpful phrases
3. Add to `SCENARIO_PERSONALITIES` dictionary
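A new entry might look like the sketch below. The scenario ("apotheke") is hypothetical and the exact keys of `SCENARIO_PERSONALITIES` are assumptions, inferred from the scenario fields (title, character, goal, helpful phrases) used elsewhere in this README:

```python
# Hypothetical shape of a SCENARIO_PERSONALITIES entry; the real keys
# live in backend/languages/<language>/models.py.
SCENARIO_PERSONALITIES = {
    "apotheke": {
        "title": "At the Apotheke",
        "character": "Frau Schmidt, a patient pharmacist",
        "goal": "Describe your symptoms and buy the right medicine",
        "helpful_phrases": [
            {"native": "Ich habe Kopfschmerzen.",
             "english": "I have a headache."},
        ],
    },
}

entry = SCENARIO_PERSONALITIES["apotheke"]
```

The `native`/`english` pair matches what the frontend renders in its helpful-phrases panel.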
## Environment Variables
### Backend (.env)
```
OPENAI_API_KEY=your_openai_api_key
GOOGLE_APPLICATION_CREDENTIALS=path/to/service-account.json
SPEECH_ENCODING=WEBM_OPUS
SPEECH_SAMPLE_RATE=48000
TTS_LANGUAGE_CODE=id-ID # or de-DE
HOST=localhost
PORT=8000
DEBUG=true
```
### Frontend
```
VITE_WS_BASE_URL=ws://localhost:8000
VITE_API_BASE_URL=http://localhost:8000
```
## Troubleshooting
### Common Issues
1. **WebSocket Connection Failed**
- Check if backend is running on port 8000
- Verify CORS settings in backend
2. **Speech Recognition Not Working**
- Ensure microphone permissions are granted
- Check Google Cloud credentials
3. **Audio Playback Issues**
- Verify browser audio permissions
- Check TTS service configuration
4. **API Errors**
- Verify OpenAI API key is valid
- Check Google Cloud Speech/TTS quotas
### Logs
- Backend logs: Console output from `python main.py`
- Frontend logs: Browser developer console
- WebSocket logs: Network tab in browser dev tools
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new features
5. Submit a pull request
## License
This project is licensed under the MIT License. See LICENSE file for details.
## Support
For issues and questions:
- Open an issue on GitHub
- Check the troubleshooting section
- Review API documentation at `http://localhost:8000/docs`

STRUCTURE.md (new file, 109 lines)

@@ -0,0 +1,109 @@
# Project Structure
This project is organized as a monorepo with multiple frontend applications and a shared backend.
## Directory Structure
```
learn-indo/
├── apps/
│ ├── indonesian-app/ # Indonesian language learning app
│ │ ├── src/
│ │ │ ├── components/
│ │ │ │ ├── SpeechInterface.vue
│ │ │ │ └── ScenarioView.vue
│ │ │ ├── App.vue
│ │ │ └── main.js
│ │ ├── package.json
│ │ ├── vite.config.js
│ │ └── index.html
│ │
│ └── german-app/ # German language learning app
│ ├── src/
│ │ ├── components/
│ │ │ └── GermanSpeechInterface.vue
│ │ ├── views/
│ │ │ ├── HomeView.vue
│ │ │ └── ScenarioView.vue
│ │ ├── App.vue
│ │ └── main.js
│ ├── package.json
│ ├── vite.config.js
│ └── index.html
├── backend/ # Shared FastAPI backend
│ ├── languages/
│ │ ├── indonesian/
│ │ └── german/
│ ├── core/
│ ├── main.py
│ └── pyproject.toml
├── package.json # Root workspace configuration
└── start-street-lingo.sh # Development startup script
```
## Applications
### Indonesian App
- **Port**: 3000
- **URL**: http://localhost:3000
- **Features**: Indonesian language learning with speech recognition and AI conversation
### German App
- **Port**: 3001
- **URL**: http://localhost:3001
- **Features**: German language learning with speech recognition and AI conversation
### Backend API
- **Port**: 8000
- **URL**: http://localhost:8000
- **Features**: Shared API serving both frontend applications
## Development Commands
### Root Level Commands
```bash
# Install dependencies for all apps
npm run install:all
# Start all services (backend + both frontends)
npm run dev:all
# Build all frontend apps
npm run build:all
# Start individual services
npm run dev:indonesian
npm run dev:german
npm run dev:backend
```
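The root-level scripts above might be wired up roughly like this in the workspace `package.json` (a sketch: the `concurrently` dependency and the exact paths are assumptions, not the repository's actual configuration):

```json
{
  "scripts": {
    "dev:backend": "cd backend && uv run python main.py",
    "dev:indonesian": "npm run dev --prefix apps/indonesian-app",
    "dev:german": "npm run dev --prefix apps/german-app",
    "dev:all": "concurrently \"npm:dev:backend\" \"npm:dev:indonesian\" \"npm:dev:german\""
  }
}
```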
### Individual App Commands
```bash
# Indonesian app
cd apps/indonesian-app
npm install
npm run dev
# German app
cd apps/german-app
npm install
npm run dev
```
### Quick Start
```bash
# Use the convenience script
./start-street-lingo.sh
```
## API Endpoints
- **Indonesian scenarios**: `/api/scenarios/indonesian`
- **German scenarios**: `/api/scenarios/german`
- **WebSocket - Indonesian**: `/ws/speech/indonesian`
- **WebSocket - German**: `/ws/speech/german`
- **Conversation feedback**: `/api/conversation-feedback`
- **Suggestions**: `/api/suggestions`
- **Translation**: `/api/translate`


@@ -0,0 +1,35 @@
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (devDependencies are needed for `npm run build`)
RUN npm ci
# Copy source code
COPY . .
# Build the application
RUN npm run build
# Production stage
FROM nginx:alpine
# Copy built assets from builder stage
COPY --from=builder /app/dist /usr/share/nginx/html
# Copy nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port 80
EXPOSE 80
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:80/ || exit 1
# Start nginx
CMD ["nginx", "-g", "daemon off;"]


@@ -0,0 +1,14 @@
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Street Lingo - Deutsch lernen in Berlin</title>
<meta name="description" content="Learn German through real Berlin scenarios. Master everyday German conversations with locals." />
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/main.js"></script>
</body>
</html>


@@ -0,0 +1,39 @@
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;

    # Enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/javascript
        application/xml+rss
        application/json;

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Handle client-side routing
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Health check endpoint
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}

apps/german-app/package-lock.json (generated, new file, 1154 lines; diff suppressed because it is too large)


@@ -0,0 +1,18 @@
{
  "name": "street-lingo-german",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "vite --port 3001",
    "build": "vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "vue": "^3.4.0",
    "vue-router": "^4.2.5"
  },
  "devDependencies": {
    "@vitejs/plugin-vue": "^4.5.0",
    "vite": "^5.0.0"
  }
}

apps/german-app/src/App.vue (new file, 348 lines)

@@ -0,0 +1,348 @@
<template>
<div id="app">
<header class="header">
<div class="header-content">
<div class="logo-section">
<div class="logo">
<span class="flag">🇩🇪</span>
<h1>Street Lingo</h1>
</div>
<p class="tagline">Deutsch lernen in Berlin - Learn German through real Berlin scenarios</p>
</div>
</div>
</header>
<nav class="scenario-nav">
<div class="nav-container">
<div class="nav-tabs">
<a
v-for="scenario in scenarios"
:key="scenario.type"
@click.prevent="handleScenarioSwitch(scenario.type)"
:class="['nav-tab', { active: $route.params.type === scenario.type }]"
href="#"
>
<span class="tab-emoji">{{ scenario.emoji }}</span>
<span class="tab-text">{{ scenario.name }}</span>
</a>
</div>
</div>
</nav>
<main class="main-content">
<router-view />
</main>
</div>
</template>
<script>
export default {
name: 'App',
data() {
return {
scenarios: [],
hasConversationProgress: false
}
},
provide() {
return {
updateConversationProgress: this.updateConversationProgress
}
},
async mounted() {
await this.loadScenarios()
},
methods: {
async loadScenarios() {
try {
const response = await fetch('/api/scenarios/german')
const scenariosData = await response.json()
this.scenarios = Object.entries(scenariosData).map(([type, scenario]) => ({
type: type,
name: scenario.title,
emoji: this.getScenarioEmoji(type),
description: scenario.description,
challenge: scenario.challenge,
goal: scenario.goal
}))
} catch (error) {
console.error('Failed to load scenarios:', error)
this.scenarios = [
{ type: 'spati', name: 'At a Späti', emoji: '🏪' },
{ type: 'wg_viewing', name: 'WG Room Viewing', emoji: '🏠' },
{ type: 'burgeramt', name: 'At the Bürgeramt', emoji: '🏛️' },
{ type: 'biergarten', name: 'At a Biergarten', emoji: '🍺' },
{ type: 'ubahn', name: 'U-Bahn Help', emoji: '🚇' }
]
}
},
getScenarioEmoji(type) {
const emojiMap = {
'spati': '🏪',
'wg_viewing': '🏠',
'burgeramt': '🏛️',
'biergarten': '🍺',
'ubahn': '🚇'
}
return emojiMap[type] || '📍'
},
updateConversationProgress(hasProgress) {
this.hasConversationProgress = hasProgress
},
handleScenarioSwitch(newScenarioType) {
if (this.$route.params.type === newScenarioType) {
return
}
if (this.hasConversationProgress) {
const confirmed = confirm(
`Das Wechseln der Szenarien setzt Ihren aktuellen Gesprächsverlauf zurück.\n\nSind Sie sicher, dass Sie zu "${this.scenarios.find(s => s.type === newScenarioType)?.name}" wechseln möchten?`
)
if (!confirmed) {
return
}
}
this.$router.push(`/scenario/${newScenarioType}`)
}
}
}
</script>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&family=Crimson+Pro:wght@400;500;600;700&family=DM+Sans:wght@300;400;500;600;700&display=swap');
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
:root {
/* German/Berlin-inspired colors */
--primary: #d4af37; /* German gold */
--primary-light: #e8c547;
--primary-dark: #b8941f;
--secondary: #cc0000; /* German red */
--accent: #2d5a3d; /* Forest green */
--surface: #f8f9fa; /* Light gray */
--surface-alt: #e9ecef;
--surface-dark: #2d3748;
--text: #1a1a1a; /* Dark gray */
--text-light: #495057;
--text-muted: #6c757d;
--border: #dee2e6;
--shadow-sm: 0 1px 2px 0 rgb(29 35 42 / 0.05);
--shadow: 0 4px 6px -1px rgb(29 35 42 / 0.1);
--shadow-lg: 0 10px 15px -3px rgb(29 35 42 / 0.1);
--radius: 12px;
--radius-lg: 16px;
}
body {
font-family: 'DM Sans', -apple-system, BlinkMacSystemFont, system-ui, sans-serif;
background: var(--surface);
min-height: 100vh;
color: var(--text);
font-feature-settings: 'liga' 1, 'kern' 1;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
#app {
min-height: 100vh;
display: flex;
flex-direction: column;
}
.header {
background: linear-gradient(135deg, var(--primary) 0%, var(--primary-light) 100%);
color: white;
padding: 1.5rem 0;
position: sticky;
top: 0;
z-index: 100;
backdrop-filter: blur(12px);
box-shadow: var(--shadow);
}
.header-content {
max-width: 1200px;
margin: 0 auto;
padding: 0 2rem;
}
.logo-section {
text-align: center;
}
.logo {
display: flex;
align-items: center;
justify-content: center;
gap: 0.75rem;
margin-bottom: 0.5rem;
}
.flag {
font-size: 2rem;
filter: drop-shadow(0 2px 4px rgba(0, 0, 0, 0.2));
}
.logo h1 {
font-family: 'Crimson Pro', serif;
font-size: 2rem;
font-weight: 600;
letter-spacing: -0.02em;
text-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
}
.tagline {
font-size: 1rem;
opacity: 0.9;
font-weight: 400;
letter-spacing: 0.01em;
text-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.scenario-nav {
background: var(--surface-alt);
border-bottom: 1px solid var(--border);
padding: 1rem 0;
box-shadow: var(--shadow-sm);
}
.nav-container {
max-width: 1200px;
margin: 0 auto;
padding: 0 2rem;
}
.nav-tabs {
display: flex;
gap: 0.5rem;
justify-content: center;
flex-wrap: wrap;
}
.nav-tab {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.75rem 1.25rem;
border-radius: var(--radius);
text-decoration: none;
color: var(--text-light);
font-weight: 500;
font-size: 0.9rem;
transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
background: var(--surface);
border: 1px solid var(--border);
position: relative;
overflow: hidden;
}
.nav-tab::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(135deg, var(--primary) 0%, var(--primary-light) 100%);
opacity: 0;
transition: opacity 0.2s ease;
z-index: -1;
}
.nav-tab:hover {
color: var(--text);
border-color: var(--primary-light);
box-shadow: var(--shadow-sm);
transform: translateY(-1px);
}
.nav-tab.active {
color: white;
background: var(--primary);
border-color: var(--primary);
box-shadow: var(--shadow);
}
.nav-tab.active::before {
opacity: 0;
}
.tab-emoji {
font-size: 1.1rem;
filter: grayscale(0.3);
transition: filter 0.2s ease;
}
.nav-tab:hover .tab-emoji,
.nav-tab.active .tab-emoji {
filter: grayscale(0);
}
.tab-text {
white-space: nowrap;
}
.main-content {
flex: 1;
padding: 2rem;
max-width: 1400px;
margin: 0 auto;
width: 100%;
}
@media (max-width: 768px) {
.header-content,
.nav-container {
padding: 0 1rem;
}
.logo h1 {
font-size: 1.75rem;
}
.tagline {
font-size: 0.9rem;
}
.nav-tabs {
gap: 0.25rem;
}
.nav-tab {
padding: 0.625rem 1rem;
font-size: 0.85rem;
}
.main-content {
padding: 1rem;
}
}
@media (max-width: 480px) {
.logo {
flex-direction: column;
gap: 0.5rem;
}
.nav-tabs {
flex-direction: column;
align-items: center;
}
.nav-tab {
width: 100%;
max-width: 280px;
justify-content: center;
}
}
</style>

(File diff suppressed because it is too large.)


@@ -0,0 +1,16 @@
import { createApp } from 'vue'
import { createRouter, createWebHistory } from 'vue-router'
import App from './App.vue'
import ScenarioView from './views/ScenarioView.vue'
const routes = [
{ path: '/', redirect: '/scenario/spati' },
{ path: '/scenario/:type', component: ScenarioView, props: true }
]
const router = createRouter({
history: createWebHistory(),
routes
})
createApp(App).use(router).mount('#app')


@@ -0,0 +1,438 @@
<template>
<div class="home-container">
<div class="hero-section">
<div class="hero-content">
<h1>Willkommen bei Street Lingo</h1>
<p class="hero-subtitle">
Master everyday German through real Berlin scenarios
</p>
<p class="hero-description">
Learn German the way Berliners actually speak. Practice conversations in real-life situations -
from ordering at a Späti to navigating the Bürgeramt.
</p>
<div class="cta-buttons">
<button @click="startLearning" class="primary-btn">
🚀 Start Learning
</button>
</div>
</div>
<div class="hero-image">
<div class="berlin-icon">🏙</div>
</div>
</div>
<div class="features-section">
<h2>Why Street Lingo?</h2>
<div class="features-grid">
<div class="feature-card">
<div class="feature-icon">🎯</div>
<h3>Real Berlin Scenarios</h3>
<p>Practice conversations in authentic situations you'll encounter as a Berlin expat</p>
</div>
<div class="feature-card">
<div class="feature-icon">🎙</div>
<h3>Speech Recognition</h3>
<p>Improve your pronunciation with advanced German speech recognition technology</p>
</div>
<div class="feature-card">
<div class="feature-icon">🏆</div>
<h3>Goal-Based Learning</h3>
<p>Complete specific objectives in each scenario to track your progress</p>
</div>
<div class="feature-card">
<div class="feature-icon">🗣</div>
<h3>Natural Conversations</h3>
<p>AI-powered conversations that adapt to your German level and learning pace</p>
</div>
</div>
</div>
<div class="scenarios-section">
<h2>Berlin Scenarios</h2>
<div class="scenarios-grid">
<div
v-for="scenario in scenarios"
:key="scenario.type"
class="scenario-card"
@click="selectScenario(scenario.type)"
>
<div class="scenario-header">
<span class="scenario-emoji">{{ getScenarioEmoji(scenario.type) }}</span>
<h3>{{ scenario.name }}</h3>
</div>
<p class="scenario-description">{{ scenario.description }}</p>
<div class="scenario-challenge">
<strong>Challenge:</strong> {{ scenario.challenge }}
</div>
<div class="scenario-goal">
<strong>Goal:</strong> {{ scenario.goal }}
</div>
</div>
</div>
</div>
<div class="about-section">
<h2>Perfect for Berlin Expats</h2>
<div class="about-content">
<p>
Whether you're navigating German bureaucracy, looking for a WG room, or just want to chat
with your neighbors, Street Lingo helps you learn the German you actually need in Berlin.
</p>
<p>
Our AI characters speak like real Berliners - casual, direct, and authentic. No textbook
German here, just the language you'll hear on the streets of Kreuzberg, Mitte, and beyond.
</p>
</div>
</div>
</div>
</template>
<script>
export default {
name: 'HomeView',
data() {
return {
scenarios: []
}
},
async mounted() {
await this.loadScenarios()
},
methods: {
async loadScenarios() {
try {
const response = await fetch('/api/scenarios/german')
const scenariosData = await response.json()
this.scenarios = Object.entries(scenariosData).map(([type, scenario]) => ({
type: type,
name: scenario.title,
description: scenario.description,
challenge: scenario.challenge,
goal: scenario.goal
}))
} catch (error) {
console.error('Failed to load scenarios:', error)
this.scenarios = [
{
type: 'spati',
name: 'At a Späti',
description: 'Buy late-night essentials at a Berlin convenience store',
challenge: 'Understanding Berlin street German and Späti culture',
goal: 'Buy a beer and some snacks'
},
{
type: 'wg_viewing',
name: 'WG Room Viewing',
description: 'View a shared apartment room in Berlin',
challenge: 'Housing terminology and presenting yourself as a good flatmate',
goal: 'Ask about rent, house rules, and express interest'
},
{
type: 'burgeramt',
name: 'At the Bürgeramt',
description: 'Deal with German bureaucracy and registration',
challenge: 'Formal German and bureaucratic terminology',
goal: 'Complete address registration (Anmeldung)'
},
{
type: 'biergarten',
name: 'At a Biergarten',
description: 'Order drinks and food at a traditional beer garden',
challenge: 'German beer terminology and traditional etiquette',
goal: 'Order a beer and traditional German food'
},
{
type: 'ubahn',
name: 'U-Bahn Help',
description: 'Get help with public transport in Berlin',
challenge: 'Transport terminology and directions',
goal: 'Get directions and buy appropriate ticket'
}
]
}
},
getScenarioEmoji(type) {
const emojiMap = {
'spati': '🏪',
'wg_viewing': '🏠',
'burgeramt': '🏛️',
'biergarten': '🍺',
'ubahn': '🚇'
}
return emojiMap[type] || '📍'
},
selectScenario(scenarioType) {
this.$router.push(`/scenario/${scenarioType}`)
},
startLearning() {
if (this.scenarios.length > 0) {
this.selectScenario(this.scenarios[0].type)
} else {
this.selectScenario('spati')
}
}
}
}
</script>
<style scoped>
.home-container {
max-width: 1200px;
margin: 0 auto;
padding: 0 2rem;
}
.hero-section {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 4rem;
align-items: center;
margin-bottom: 4rem;
padding: 2rem 0;
}
.hero-content h1 {
font-family: 'Crimson Pro', serif;
font-size: 3rem;
font-weight: 700;
color: var(--text);
margin-bottom: 1rem;
line-height: 1.1;
}
.hero-subtitle {
font-size: 1.5rem;
color: var(--primary);
font-weight: 600;
margin-bottom: 1.5rem;
}
.hero-description {
font-size: 1.1rem;
color: var(--text-light);
line-height: 1.6;
margin-bottom: 2rem;
}
.cta-buttons {
display: flex;
gap: 1rem;
}
.primary-btn {
background: linear-gradient(135deg, var(--primary) 0%, var(--primary-light) 100%);
color: white;
border: none;
padding: 1rem 2rem;
border-radius: var(--radius-lg);
font-size: 1.1rem;
font-weight: 600;
cursor: pointer;
transition: all 0.3s ease;
box-shadow: var(--shadow);
}
.primary-btn:hover {
transform: translateY(-2px);
box-shadow: var(--shadow-lg);
}
.hero-image {
display: flex;
align-items: center;
justify-content: center;
}
.berlin-icon {
font-size: 8rem;
opacity: 0.8;
filter: drop-shadow(0 4px 8px rgba(0, 0, 0, 0.1));
}
.features-section {
margin-bottom: 4rem;
}
.features-section h2 {
font-family: 'Crimson Pro', serif;
font-size: 2.5rem;
text-align: center;
color: var(--text);
margin-bottom: 3rem;
}
.features-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(280px, 1fr));
gap: 2rem;
}
.feature-card {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 2rem;
text-align: center;
transition: all 0.3s ease;
box-shadow: var(--shadow-sm);
}
.feature-card:hover {
transform: translateY(-4px);
box-shadow: var(--shadow-lg);
border-color: var(--primary-light);
}
.feature-icon {
font-size: 3rem;
margin-bottom: 1rem;
}
.feature-card h3 {
font-size: 1.25rem;
font-weight: 600;
color: var(--text);
margin-bottom: 1rem;
}
.feature-card p {
color: var(--text-light);
line-height: 1.5;
}
.scenarios-section {
margin-bottom: 4rem;
}
.scenarios-section h2 {
font-family: 'Crimson Pro', serif;
font-size: 2.5rem;
text-align: center;
color: var(--text);
margin-bottom: 3rem;
}
.scenarios-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(320px, 1fr));
gap: 2rem;
}
.scenario-card {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 2rem;
transition: all 0.3s ease;
cursor: pointer;
box-shadow: var(--shadow-sm);
}
.scenario-card:hover {
transform: translateY(-4px);
box-shadow: var(--shadow-lg);
border-color: var(--primary-light);
}
.scenario-header {
display: flex;
align-items: center;
gap: 1rem;
margin-bottom: 1rem;
}
.scenario-emoji {
font-size: 2rem;
}
.scenario-header h3 {
font-size: 1.25rem;
font-weight: 600;
color: var(--text);
}
.scenario-description {
color: var(--text-light);
line-height: 1.5;
margin-bottom: 1rem;
}
.scenario-challenge,
.scenario-goal {
font-size: 0.9rem;
color: var(--text-light);
margin-bottom: 0.5rem;
}
.scenario-challenge strong,
.scenario-goal strong {
color: var(--primary);
}
.about-section {
background: var(--surface-alt);
border-radius: var(--radius-lg);
padding: 3rem;
text-align: center;
}
.about-section h2 {
font-family: 'Crimson Pro', serif;
font-size: 2.5rem;
color: var(--text);
margin-bottom: 2rem;
}
.about-content p {
font-size: 1.1rem;
color: var(--text-light);
line-height: 1.6;
margin-bottom: 1.5rem;
max-width: 800px;
margin-left: auto;
margin-right: auto;
}
.about-content p:last-child {
margin-bottom: 0;
}
@media (max-width: 768px) {
.home-container {
padding: 0 1rem;
}
.hero-section {
grid-template-columns: 1fr;
gap: 2rem;
text-align: center;
}
.hero-content h1 {
font-size: 2.5rem;
}
.hero-subtitle {
font-size: 1.25rem;
}
.berlin-icon {
font-size: 6rem;
}
.features-section h2,
.scenarios-section h2,
.about-section h2 {
font-size: 2rem;
}
.scenarios-grid {
grid-template-columns: 1fr;
}
.about-section {
padding: 2rem;
}
}
</style>


@@ -0,0 +1,456 @@
<template>
<div class="scenario-container">
<div class="scenario-header" v-if="scenarioData">
<h2>{{ getScenarioEmoji(type) }} {{ scenarioData.title }}</h2>
<div class="scenario-goal">
<strong>Ziel:</strong> {{ scenarioData.goal }}
</div>
</div>
<div class="conversation-area">
<div class="left-panel">
<div class="character-section" v-if="scenarioData">
<div class="character-header">
<div class="character-avatar">
{{ getCharacterAvatar(type) }}
</div>
<div class="character-info">
<h3>{{ scenarioData.character }}</h3>
<p class="character-description">{{ scenarioData.character_background }}</p>
</div>
</div>
</div>
<div class="ai-chat-section">
<GermanSpeechInterface :scenario="type" />
</div>
</div>
<div class="right-panel">
<div class="context-section" v-if="scenarioData">
<h3>📍 Situationskontext</h3>
<div class="context-info">
<div class="context-item">
<div class="context-label">Ort</div>
<div class="context-value">{{ scenarioData.location }}</div>
</div>
<div class="context-item">
<div class="context-label">Beschreibung</div>
<div class="context-value">{{ scenarioData.description }}</div>
</div>
<div class="context-item">
<div class="context-label">Herausforderung</div>
<div class="context-value">{{ scenarioData.challenge }}</div>
</div>
</div>
</div>
<div class="helpful-phrases" v-if="scenarioData?.helpful_phrases">
<h4>💬 Hilfreiche Phrasen:</h4>
<div class="phrases-list">
<div
v-for="phrase in scenarioData.helpful_phrases"
:key="phrase.native"
class="phrase-item"
>
<div class="phrase-german">{{ phrase.native }}</div>
<div class="phrase-english">{{ phrase.english }}</div>
</div>
</div>
</div>
</div>
</div>
</div>
</template>
<script>
import GermanSpeechInterface from '../components/GermanSpeechInterface.vue'
export default {
name: 'ScenarioView',
components: {
GermanSpeechInterface
},
props: {
type: {
type: String,
required: true
}
},
data() {
return {
scenarioData: null
}
},
async mounted() {
await this.loadScenarioData()
},
watch: {
type: {
immediate: true,
async handler(newType) {
if (newType) {
await this.loadScenarioData()
}
}
}
},
methods: {
async loadScenarioData() {
try {
console.log('Loading German scenario data for type:', this.type)
const response = await fetch('/api/scenarios/german')
const scenarios = await response.json()
console.log('All German scenarios:', scenarios)
this.scenarioData = scenarios[this.type]
console.log('Selected German scenario data:', this.scenarioData)
} catch (error) {
console.error('Failed to load German scenario data:', error)
}
},
getScenarioEmoji(type) {
const emojiMap = {
'spati': '🏪',
'wg_viewing': '🏠',
'burgeramt': '🏛️',
'biergarten': '🍺',
'ubahn': '🚇'
}
return emojiMap[type] || '📍'
},
getCharacterAvatar(type) {
const avatarMap = {
'spati': '👨‍💼',
'wg_viewing': '👩‍🎓',
'burgeramt': '👩‍💼',
'biergarten': '👨‍🍳',
'ubahn': '👨‍🚀'
}
return avatarMap[type] || '👤'
}
}
}
</script>
<style scoped>
.scenario-container {
max-width: 1400px;
margin: 0 auto;
width: 100%;
padding: 0 2rem;
}
.scenario-header {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 2rem;
margin-bottom: 2rem;
box-shadow: var(--shadow-sm);
position: relative;
overflow: hidden;
}
.scenario-header::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: linear-gradient(135deg, var(--primary) 0%, var(--primary-light) 100%);
}
.scenario-header h2 {
font-family: 'Crimson Pro', serif;
font-size: 1.75rem;
font-weight: 600;
color: var(--text);
margin-bottom: 0.5rem;
letter-spacing: -0.01em;
}
.scenario-goal {
color: var(--text-light);
font-size: 1rem;
font-weight: 400;
}
.scenario-goal strong {
color: var(--primary);
font-weight: 500;
}
.conversation-area {
display: grid;
grid-template-columns: 1fr 320px;
gap: 2rem;
min-height: calc(100vh - 300px);
}
.left-panel {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.character-section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 1.5rem;
box-shadow: var(--shadow-sm);
position: relative;
overflow: hidden;
}
.character-section::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: var(--accent);
}
.character-header {
display: flex;
align-items: center;
gap: 1rem;
margin-bottom: 1rem;
}
.character-avatar {
width: 60px;
height: 60px;
background: linear-gradient(135deg, var(--primary) 0%, var(--primary-light) 100%);
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
font-size: 1.75rem;
box-shadow: var(--shadow);
position: relative;
}
.character-info h3 {
font-family: 'Crimson Pro', serif;
font-size: 1.25rem;
font-weight: 600;
color: var(--text);
margin: 0;
letter-spacing: -0.01em;
}
.character-description {
color: var(--text-light);
margin: 0;
line-height: 1.5;
font-size: 0.9rem;
}
.ai-chat-section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
overflow: hidden;
box-shadow: var(--shadow-sm);
min-height: 500px;
}
.right-panel {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.context-section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 1.5rem;
box-shadow: var(--shadow-sm);
position: relative;
overflow: hidden;
}
.context-section::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: var(--primary);
}
.context-section h3 {
font-size: 1rem;
font-weight: 600;
color: var(--text);
margin: 0 0 1rem 0;
display: flex;
align-items: center;
gap: 0.5rem;
}
.context-info {
display: flex;
flex-direction: column;
gap: 1rem;
}
.context-item {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.context-label {
font-size: 0.8rem;
font-weight: 500;
color: var(--text-muted);
text-transform: uppercase;
letter-spacing: 0.05em;
}
.context-value {
color: var(--text-light);
line-height: 1.4;
font-size: 0.9rem;
}
.helpful-phrases {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 1.5rem;
box-shadow: var(--shadow-sm);
position: relative;
overflow: hidden;
}
.helpful-phrases::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: var(--accent);
}
.helpful-phrases h4 {
font-size: 1rem;
font-weight: 600;
color: var(--text);
margin: 0 0 1rem 0;
display: flex;
align-items: center;
gap: 0.5rem;
}
.phrases-list {
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.phrase-item {
background: var(--surface-alt);
border: 1px solid var(--border);
border-radius: var(--radius);
padding: 1rem;
transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
cursor: pointer;
position: relative;
}
.phrase-item:hover {
border-color: var(--primary-light);
box-shadow: var(--shadow-sm);
transform: translateY(-1px);
}
.phrase-item::before {
content: '';
position: absolute;
left: 0;
top: 0;
bottom: 0;
width: 3px;
background: var(--accent);
border-radius: 0 2px 2px 0;
transform: scaleY(0);
transition: transform 0.2s ease;
}
.phrase-item:hover::before {
transform: scaleY(1);
}
.phrase-german {
font-weight: 500;
color: var(--text);
margin-bottom: 0.25rem;
font-size: 0.9rem;
}
.phrase-english {
font-size: 0.8rem;
color: var(--text-muted);
font-style: italic;
opacity: 0.8;
}
@media (max-width: 1024px) {
.conversation-area {
grid-template-columns: 1fr;
gap: 1.5rem;
}
.character-section {
padding: 1.25rem;
}
.character-header {
justify-content: center;
text-align: center;
}
.ai-chat-section {
min-height: 400px;
}
}
@media (max-width: 768px) {
.scenario-container {
padding: 0 1rem;
}
.scenario-header {
padding: 1.5rem;
}
.scenario-header h2 {
font-size: 1.5rem;
}
.character-section,
.context-section,
.helpful-phrases {
padding: 1rem;
}
.character-avatar {
width: 50px;
height: 50px;
font-size: 1.5rem;
}
.character-info h3 {
font-size: 1.1rem;
}
}
</style>

@@ -0,0 +1,20 @@
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
export default defineConfig({
plugins: [vue()],
server: {
port: 3001,
proxy: {
'/api': {
target: 'http://localhost:8000',
changeOrigin: true
},
'/ws': {
target: 'ws://localhost:8000',
ws: true,
changeOrigin: true
}
}
}
})

@@ -0,0 +1,35 @@
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (devDependencies include vite, which `npm run build` needs)
RUN npm ci
# Copy source code
COPY . .
# Build the application
RUN npm run build
# Production stage
FROM nginx:alpine
# Copy built assets from builder stage
COPY --from=builder /app/dist /usr/share/nginx/html
# Copy nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port 80
EXPOSE 80
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:80/ || exit 1
# Start nginx
CMD ["nginx", "-g", "daemon off;"]

@@ -0,0 +1,12 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Learn Indonesian - Realistic Scenarios</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/main.js"></script>
</body>
</html>

@@ -0,0 +1,39 @@
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Enable gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied any;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/xml+rss
application/json;
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# Handle client-side routing
location / {
try_files $uri $uri/ /index.html;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}

1102 apps/indonesian-app/package-lock.json generated Normal file
File diff suppressed because it is too large.

@@ -0,0 +1,19 @@
{
"name": "learn-indonesian-app",
"version": "1.0.0",
"description": "A Vue.js app for learning Indonesian through realistic scenarios",
"scripts": {
"dev": "vite --port 3000",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"vue": "^3.3.4",
"vue-router": "^4.2.4",
"axios": "^1.4.0"
},
"devDependencies": {
"@vitejs/plugin-vue": "^4.2.3",
"vite": "^4.4.5"
}
}

@@ -0,0 +1,343 @@
<template>
<div id="app">
<header class="header">
<div class="header-content">
<div class="logo-section">
<div class="logo">
<span class="flag">🇮🇩</span>
<h1>Learn Indonesian</h1>
</div>
<p class="tagline">Learn Indonesian through everyday scenarios</p>
</div>
</div>
</header>
<nav class="scenario-nav">
<div class="nav-container">
<div class="nav-tabs">
<a
v-for="scenario in scenarios"
:key="scenario.type"
@click.prevent="handleScenarioSwitch(scenario.type)"
:class="['nav-tab', { active: $route.params.type === scenario.type }]"
href="#"
>
<span class="tab-emoji">{{ scenario.emoji }}</span>
<span class="tab-text">{{ scenario.name }}</span>
</a>
</div>
</div>
</nav>
<main class="main-content">
<router-view />
</main>
</div>
</template>
<script>
export default {
name: 'App',
data() {
return {
scenarios: [],
hasConversationProgress: false
}
},
provide() {
return {
updateConversationProgress: this.updateConversationProgress
}
},
async mounted() {
await this.loadScenarios()
},
methods: {
async loadScenarios() {
try {
const response = await fetch('/api/scenarios/indonesian')
const scenariosData = await response.json()
this.scenarios = Object.entries(scenariosData).map(([type, scenario]) => ({
type: type,
name: scenario.title,
emoji: this.getScenarioEmoji(type),
description: scenario.description,
challenge: scenario.challenge,
goal: scenario.goal
}))
} catch (error) {
console.error('Failed to load scenarios:', error)
this.scenarios = [
{ type: 'warung', name: 'At a Warung', emoji: '🍜' },
{ type: 'ojek', name: 'Taking an Ojek', emoji: '🏍️' },
{ type: 'alfamart', name: 'At Alfamart', emoji: '🏪' },
{ type: 'coffee_shop', name: 'Coffee Shop Small Talk', emoji: '☕' }
]
}
},
getScenarioEmoji(type) {
const emojiMap = {
'warung': '🍜',
'ojek': '🏍️',
'alfamart': '🏪',
'coffee_shop': '☕'
}
return emojiMap[type] || '📍'
},
updateConversationProgress(hasProgress) {
this.hasConversationProgress = hasProgress
},
handleScenarioSwitch(newScenarioType) {
if (this.$route.params.type === newScenarioType) {
return
}
if (this.hasConversationProgress) {
const confirmed = confirm(
`Switching scenarios will reset your current conversation progress.\n\nAre you sure you want to switch to "${this.scenarios.find(s => s.type === newScenarioType)?.name}"?`
)
if (!confirmed) {
return
}
}
this.$router.push(`/scenario/${newScenarioType}`)
}
}
}
</script>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&family=Crimson+Pro:wght@400;500;600;700&family=DM+Sans:wght@300;400;500;600;700&display=swap');
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
:root {
--primary: #c53030;
--primary-light: #e53e3e;
--primary-dark: #9c1c1c;
--secondary: #dd6b20;
--accent: #38a169;
--surface: #fefcf7;
--surface-alt: #f7f5f0;
--surface-dark: #2d3748;
--text: #2d3748;
--text-light: #4a5568;
--text-muted: #718096;
--border: #e2d8cc;
--shadow-sm: 0 1px 2px 0 rgb(45 55 72 / 0.05);
--shadow: 0 4px 6px -1px rgb(45 55 72 / 0.1);
--shadow-lg: 0 10px 15px -3px rgb(45 55 72 / 0.1);
--radius: 12px;
--radius-lg: 16px;
}
body {
font-family: 'DM Sans', -apple-system, BlinkMacSystemFont, system-ui, sans-serif;
background: var(--surface);
min-height: 100vh;
color: var(--text);
font-feature-settings: 'liga' 1, 'kern' 1;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
#app {
min-height: 100vh;
display: flex;
flex-direction: column;
}
.header {
background: var(--surface);
border-bottom: 1px solid var(--border);
padding: 1.5rem 0;
position: sticky;
top: 0;
z-index: 100;
backdrop-filter: blur(12px);
-webkit-backdrop-filter: blur(12px);
}
.header-content {
max-width: 1200px;
margin: 0 auto;
padding: 0 2rem;
}
.logo-section {
text-align: center;
}
.logo {
display: flex;
align-items: center;
justify-content: center;
gap: 0.75rem;
margin-bottom: 0.5rem;
}
.flag {
font-size: 2rem;
filter: drop-shadow(0 2px 4px rgba(0, 0, 0, 0.1));
}
.logo h1 {
font-family: 'Crimson Pro', serif;
font-size: 2rem;
font-weight: 600;
color: var(--text);
letter-spacing: -0.02em;
}
.tagline {
font-size: 1rem;
color: var(--text-light);
font-weight: 400;
letter-spacing: 0.01em;
}
.scenario-nav {
background: var(--surface-alt);
border-bottom: 1px solid var(--border);
padding: 1rem 0;
}
.nav-container {
max-width: 1200px;
margin: 0 auto;
padding: 0 2rem;
}
.nav-tabs {
display: flex;
gap: 0.5rem;
justify-content: center;
flex-wrap: wrap;
}
.nav-tab {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.75rem 1.25rem;
border-radius: var(--radius);
text-decoration: none;
color: var(--text-light);
font-weight: 500;
font-size: 0.9rem;
transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
background: var(--surface);
border: 1px solid var(--border);
position: relative;
overflow: hidden;
}
.nav-tab::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(135deg, var(--primary) 0%, var(--primary-light) 100%);
opacity: 0;
transition: opacity 0.2s ease;
z-index: -1;
}
.nav-tab:hover {
color: var(--text);
border-color: var(--primary-light);
box-shadow: var(--shadow-sm);
transform: translateY(-1px);
}
.nav-tab.active {
color: white;
background: var(--primary);
border-color: var(--primary);
box-shadow: var(--shadow);
}
.nav-tab.active::before {
opacity: 0;
}
.tab-emoji {
font-size: 1.1rem;
filter: grayscale(0.3);
transition: filter 0.2s ease;
}
.nav-tab:hover .tab-emoji,
.nav-tab.active .tab-emoji {
filter: grayscale(0);
}
.tab-text {
white-space: nowrap;
}
.main-content {
flex: 1;
padding: 2rem;
max-width: 1400px;
margin: 0 auto;
width: 100%;
}
@media (max-width: 768px) {
.header-content,
.nav-container {
padding: 0 1rem;
}
.logo h1 {
font-size: 1.75rem;
}
.tagline {
font-size: 0.9rem;
}
.nav-tabs {
gap: 0.25rem;
}
.nav-tab {
padding: 0.625rem 1rem;
font-size: 0.85rem;
}
.main-content {
padding: 1rem;
}
}
@media (max-width: 480px) {
.logo {
flex-direction: column;
gap: 0.5rem;
}
.nav-tabs {
flex-direction: column;
align-items: center;
}
.nav-tab {
width: 100%;
max-width: 200px;
justify-content: center;
}
}
</style>

@@ -0,0 +1,428 @@
<template>
<div class="scenario-container">
<div class="scenario-header" v-if="scenarioData">
<h2>{{ getScenarioEmoji(type) }} {{ scenarioData.title }}</h2>
<div class="scenario-goal">
<strong>Goal:</strong> {{ scenarioData.goal }}
</div>
</div>
<div class="conversation-area">
<div class="left-panel">
<div class="character-section" v-if="scenarioData">
<div class="character-header">
<div class="character-avatar">
{{ getCharacterAvatar(type) }}
</div>
<div class="character-info">
<h3>{{ scenarioData.character }}</h3>
<p class="character-description">{{ scenarioData.character_background }}</p>
</div>
</div>
</div>
<div class="ai-chat-section">
<SpeechInterface :scenario="type" />
</div>
</div>
<div class="right-panel">
<div class="context-section" v-if="scenarioData">
<h3>📍 Situation Context</h3>
<div class="context-info">
<div class="context-item">
<div class="context-label">Location</div>
<div class="context-value">{{ scenarioData.location }}</div>
</div>
<div class="context-item">
<div class="context-label">Description</div>
<div class="context-value">{{ scenarioData.description }}</div>
</div>
<div class="context-item">
<div class="context-label">Challenge</div>
<div class="context-value">{{ scenarioData.challenge }}</div>
</div>
</div>
</div>
<div class="helpful-phrases" v-if="scenarioData?.helpful_phrases">
<h4>💬 Helpful Phrases:</h4>
<div class="phrases-list">
<div
v-for="phrase in scenarioData.helpful_phrases"
:key="phrase.indonesian"
class="phrase-item"
>
<div class="phrase-indonesian">{{ phrase.indonesian }}</div>
<div class="phrase-english">{{ phrase.english }}</div>
</div>
</div>
</div>
</div>
</div>
</div>
</template>
<script>
import SpeechInterface from './SpeechInterface.vue'
export default {
name: 'ScenarioView',
components: {
SpeechInterface
},
props: {
type: {
type: String,
required: true
}
},
data() {
return {
scenarioData: null
}
},
async mounted() {
await this.loadScenarioData()
},
watch: {
type: {
immediate: true,
async handler(newType) {
if (newType) {
await this.loadScenarioData()
}
}
}
},
methods: {
async loadScenarioData() {
try {
console.log('Loading scenario data for type:', this.type)
const response = await fetch('/api/scenarios/indonesian')
const scenarios = await response.json()
console.log('All scenarios:', scenarios)
this.scenarioData = scenarios[this.type]
console.log('Selected scenario data:', this.scenarioData)
} catch (error) {
console.error('Failed to load scenario data:', error)
}
},
getScenarioEmoji(type) {
const emojiMap = {
'warung': '🍜',
'ojek': '🏍️',
'alfamart': '🏪',
'coffee_shop': '☕'
}
return emojiMap[type] || '📍'
},
getCharacterAvatar(type) {
const avatarMap = {
'warung': '👨‍🍳',
'ojek': '👩‍🏋️',
'alfamart': '👩‍💼',
'coffee_shop': '👨‍💼'
}
return avatarMap[type] || '👤'
}
}
}
</script>
<style scoped>
.scenario-container {
max-width: 1400px;
margin: 0 auto;
width: 100%;
padding: 0 2rem; /* Add horizontal padding to prevent border touching */
}
.scenario-header {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 2rem;
margin-bottom: 2rem;
box-shadow: var(--shadow-sm);
}
.scenario-header h2 {
font-family: 'Crimson Pro', serif;
font-size: 1.75rem;
font-weight: 600;
color: var(--text);
margin-bottom: 0.5rem;
letter-spacing: -0.01em;
}
.scenario-goal {
color: var(--text-light);
font-size: 1rem;
font-weight: 400;
}
.scenario-goal strong {
color: var(--primary);
font-weight: 500;
}
.conversation-area {
display: grid;
grid-template-columns: 1fr 320px;
gap: 2rem;
min-height: calc(100vh - 300px);
}
.left-panel {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.character-section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 1.5rem;
box-shadow: var(--shadow-sm);
}
.character-header {
display: flex;
align-items: center;
gap: 1rem;
margin-bottom: 1rem;
}
.character-avatar {
width: 60px;
height: 60px;
background: var(--primary);
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
font-size: 1.75rem;
box-shadow: var(--shadow);
position: relative;
}
.character-info h3 {
font-family: 'Crimson Pro', serif;
font-size: 1.25rem;
font-weight: 600;
color: var(--text);
margin: 0;
letter-spacing: -0.01em;
}
.character-description {
color: var(--text-light);
margin: 0;
line-height: 1.5;
font-size: 0.9rem;
}
.ai-chat-section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
overflow: hidden;
box-shadow: var(--shadow-sm);
min-height: 500px;
}
.right-panel {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.context-section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 1.5rem;
box-shadow: var(--shadow-sm);
position: relative;
overflow: hidden;
}
.context-section::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: var(--primary);
}
.context-section h3 {
font-size: 1rem;
font-weight: 600;
color: var(--text);
margin: 0 0 1rem 0;
display: flex;
align-items: center;
gap: 0.5rem;
}
.context-info {
display: flex;
flex-direction: column;
gap: 1rem;
}
.context-item {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.context-label {
font-size: 0.8rem;
font-weight: 500;
color: var(--text-muted);
text-transform: uppercase;
letter-spacing: 0.05em;
}
.context-value {
color: var(--text-light);
line-height: 1.4;
font-size: 0.9rem;
}
.helpful-phrases {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 1.5rem;
box-shadow: var(--shadow-sm);
position: relative;
overflow: hidden;
}
.helpful-phrases::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 3px;
background: var(--accent);
}
.helpful-phrases h4 {
font-size: 1rem;
font-weight: 600;
color: var(--text);
margin: 0 0 1rem 0;
display: flex;
align-items: center;
gap: 0.5rem;
}
.phrases-list {
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.phrase-item {
background: var(--surface-alt);
border: 1px solid var(--border);
border-radius: var(--radius);
padding: 1rem;
transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
cursor: pointer;
position: relative;
}
.phrase-item:hover {
border-color: var(--primary-light);
box-shadow: var(--shadow-sm);
transform: translateY(-1px);
}
.phrase-item::before {
content: '';
position: absolute;
left: 0;
top: 0;
bottom: 0;
width: 3px;
background: var(--accent);
border-radius: 0 2px 2px 0;
transform: scaleY(0);
transition: transform 0.2s ease;
}
.phrase-indonesian {
font-weight: 500;
color: var(--text);
margin-bottom: 0.25rem;
font-size: 0.9rem;
}
.phrase-english {
font-size: 0.8rem;
color: var(--text-muted);
font-style: italic;
opacity: 0.8;
}
@media (max-width: 1024px) {
.conversation-area {
grid-template-columns: 1fr;
gap: 1.5rem;
}
.character-section {
padding: 1.25rem;
}
.character-header {
justify-content: center;
text-align: center;
}
.ai-chat-section {
min-height: 400px;
}
}
@media (max-width: 768px) {
.scenario-container {
padding: 0 1rem; /* Reduce padding on mobile */
}
.scenario-header {
padding: 1.5rem;
}
.scenario-header h2 {
font-size: 1.5rem;
}
.character-section,
.context-section,
.helpful-phrases {
padding: 1rem;
}
.character-avatar {
width: 50px;
height: 50px;
font-size: 1.5rem;
}
.character-info h3 {
font-size: 1.1rem;
}
}
</style>

File diff suppressed because it is too large.

@@ -0,0 +1,16 @@
import { createApp } from 'vue'
import { createRouter, createWebHistory } from 'vue-router'
import App from './App.vue'
import ScenarioView from './components/ScenarioView.vue'
const routes = [
{ path: '/', redirect: '/scenario/warung' },
{ path: '/scenario/:type', component: ScenarioView, props: true }
]
const router = createRouter({
history: createWebHistory(),
routes
})
createApp(App).use(router).mount('#app')

@@ -0,0 +1,15 @@
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
export default defineConfig({
plugins: [vue()],
server: {
port: 3000,
proxy: {
'/api': {
target: 'http://localhost:8000',
changeOrigin: true
}
}
}
})

29 backend/.env.example Normal file

@@ -0,0 +1,29 @@
# Google Cloud Configuration
GOOGLE_APPLICATION_CREDENTIALS=path/to/your/service-account-key.json
# OpenAI Configuration
OPENAI_API_KEY=your-openai-api-key-here
# Optional: OpenAI Model Configuration
OPENAI_MODEL=gpt-4o-mini
# Optional: Google Cloud Speech-to-Text Configuration
GOOGLE_CLOUD_PROJECT=your-project-id
SPEECH_LANGUAGE_CODE=id-ID
SPEECH_SAMPLE_RATE=48000
SPEECH_ENCODING=WEBM_OPUS
# Optional: Google Cloud Text-to-Speech Configuration
TTS_LANGUAGE_CODE=id-ID
TTS_VOICE_NAME=id-ID-Standard-A
TTS_VOICE_GENDER=FEMALE
TTS_SPEAKING_RATE=1.0
TTS_PITCH=0.0
# Optional: Server Configuration
HOST=0.0.0.0
PORT=8000
DEBUG=false
# Optional: CORS Configuration
CORS_ORIGINS=http://localhost:3000,http://localhost:5173

127 backend/.gitignore vendored Normal file

@@ -0,0 +1,127 @@
# Environment variables
.env
.env.local
.env.*.local
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
Pipfile.lock
# PEP 582
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Google Cloud credentials
*-key.json
*.json

42 backend/Dockerfile Normal file

@@ -0,0 +1,42 @@
FROM python:3.11-slim
# Set environment variables
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1
# Set work directory
WORKDIR /app
# Install system dependencies (curl is required by the HEALTHCHECK below)
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Install UV for faster Python package management
RUN pip install uv
# Copy pyproject.toml and uv.lock
COPY pyproject.toml uv.lock ./
# Install Python dependencies
RUN uv sync --frozen --no-dev
# Copy application code
COPY . .
# Create non-root user
RUN useradd --create-home --shell /bin/bash app
USER app
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/api/health || exit 1
# Run the application
CMD ["uv", "run", "python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

65 backend/config.py Normal file

@@ -0,0 +1,65 @@
import os
from typing import List
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
class Config:
"""Configuration settings loaded from environment variables."""
# Google Cloud Configuration
GOOGLE_APPLICATION_CREDENTIALS: str = os.getenv("GOOGLE_APPLICATION_CREDENTIALS", "")
GOOGLE_CLOUD_PROJECT: str = os.getenv("GOOGLE_CLOUD_PROJECT", "")
# OpenAI Configuration
OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")
OPENAI_MODEL: str = os.getenv("OPENAI_MODEL", "gpt-4o-mini")
# Speech-to-Text Configuration
SPEECH_LANGUAGE_CODE: str = os.getenv("SPEECH_LANGUAGE_CODE", "id-ID")
SPEECH_SAMPLE_RATE: int = int(os.getenv("SPEECH_SAMPLE_RATE", "48000"))
SPEECH_ENCODING: str = os.getenv("SPEECH_ENCODING", "WEBM_OPUS")
# Text-to-Speech Configuration
TTS_LANGUAGE_CODE: str = os.getenv("TTS_LANGUAGE_CODE", "id-ID")
TTS_VOICE_NAME: str = os.getenv("TTS_VOICE_NAME", "id-ID-Standard-A")
TTS_VOICE_GENDER: str = os.getenv("TTS_VOICE_GENDER", "FEMALE")
TTS_SPEAKING_RATE: float = float(os.getenv("TTS_SPEAKING_RATE", "1.0"))
TTS_PITCH: float = float(os.getenv("TTS_PITCH", "0.0"))
# Server Configuration
HOST: str = os.getenv("HOST", "0.0.0.0")
PORT: int = int(os.getenv("PORT", "8000"))
DEBUG: bool = os.getenv("DEBUG", "false").lower() == "true"
# CORS Configuration
CORS_ORIGINS: List[str] = [
origin.strip()
for origin in os.getenv("CORS_ORIGINS", "http://localhost:3000,http://localhost:5173").split(",")
]
@classmethod
def validate(cls) -> None:
"""Validate required environment variables are set."""
required_vars = [
("OPENAI_API_KEY", cls.OPENAI_API_KEY),
]
missing_vars = []
for var_name, var_value in required_vars:
if not var_value:
missing_vars.append(var_name)
if missing_vars:
raise ValueError(f"Missing required environment variables: {', '.join(missing_vars)}")
# Warn about optional but recommended variables
if not cls.GOOGLE_APPLICATION_CREDENTIALS:
print("Warning: GOOGLE_APPLICATION_CREDENTIALS not set. Speech features may not work.")
if not cls.GOOGLE_CLOUD_PROJECT:
print("Warning: GOOGLE_CLOUD_PROJECT not set. Some Google Cloud features may not work.")
# Global config instance
config = Config()
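`Config.validate` above fails fast with one aggregated error listing every missing required variable. The pattern can be exercised in isolation; a minimal standalone sketch (it re-creates the helper rather than importing `backend/config.py`, so the function name here is illustrative):

```python
import os


def validate_required(required_vars: dict) -> None:
    # Collect every required variable that is unset or empty, then fail
    # with a single aggregated error, mirroring Config.validate above.
    missing = [name for name, value in required_vars.items() if not value]
    if missing:
        raise ValueError(
            f"Missing required environment variables: {', '.join(missing)}"
        )


# Simulate a process started without an API key.
os.environ.pop("OPENAI_API_KEY", None)
try:
    validate_required({"OPENAI_API_KEY": os.getenv("OPENAI_API_KEY", "")})
except ValueError as exc:
    print(exc)  # Missing required environment variables: OPENAI_API_KEY
```

Aggregating the missing names means a misconfigured deployment reports everything wrong in one startup failure instead of one variable per restart.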

127 backend/core/base_models.py Normal file

@@ -0,0 +1,127 @@
from pydantic import BaseModel
from typing import List, Optional, Dict
from enum import Enum
class HelpfulPhrase(BaseModel):
native: str
english: str
class CharacterType(str, Enum):
VENDOR = "vendor"
DRIVER = "driver"
CASHIER = "cashier"
OFFICIAL = "official"
NEIGHBOR = "neighbor"
SERVICE_WORKER = "service_worker"
GENERIC = "generic"
class PersonalityTone(str, Enum):
FRIENDLY = "friendly"
CASUAL = "casual"
FORMAL = "formal"
CHEERFUL = "cheerful"
BUSINESS_LIKE = "business_like"
SLEEPY = "sleepy"
CHATTY = "chatty"
GRUFF = "gruff"
HELPFUL = "helpful"
class Gender(str, Enum):
MALE = "male"
FEMALE = "female"
NEUTRAL = "neutral"
class GoalItem(BaseModel):
id: str
description: str
keywords: List[str] = []
completed: bool = False
class BasePersonality(BaseModel):
character_type: CharacterType
name: str
gender: Gender
tone: PersonalityTone
age_range: str
background: str
typical_phrases: List[str]
response_style: str
location_context: str
scenario_title: str
scenario_description: str
scenario_challenge: str
scenario_goal: str
goal_items: List[GoalItem]
helpful_phrases: List[HelpfulPhrase]
is_impatient: bool = False
is_helpful: bool = True
is_talkative: bool = True
uses_slang: bool = False
# Language-specific settings
language_code: str
country_code: str
def get_system_prompt(self, scenario_context: str = "", language_specific_instructions: str = "") -> str:
"""Generate a system prompt based on this personality."""
casualness_note = f"""
SPEAKING STYLE - BE VERY CASUAL AND NATURAL:
- Use everyday {self.language_code.upper()} like real people do
- Drop formal words when people actually don't use them
- Use contractions and casual speech patterns
- Speak like you're talking to a friend or regular customer
- Don't be overly polite or formal - be natural and relaxed
- Sound like real street conversation
{language_specific_instructions}
"""
interaction_guide = self._get_interaction_guide()
base_prompt = f"""You are {self.name}, a real {self.character_type.value.replace('_', ' ')} in {self.country_code}. You talk like a normal person - casual, natural, and relaxed.
SCENARIO CONTEXT:
📍 {self.scenario_title}
🎯 What's happening: {self.scenario_description}
Challenge: {self.scenario_challenge}
🏆 Goal: {self.scenario_goal}
{casualness_note}
CHARACTER:
- {self.name} ({self.age_range} {self.character_type.value.replace('_', ' ')})
- {self.background}
- Works at: {self.location_context}
- Personality: {self.tone.value}, {'talkative' if self.is_talkative else 'quiet'}, {'helpful' if self.is_helpful else 'business-focused'}
YOUR TYPICAL PHRASES (use these naturally):
{chr(10).join(f'- {phrase}' for phrase in self.typical_phrases)}
CRITICAL RULES - READ CONVERSATION HISTORY CAREFULLY:
1. You are {self.name} - NOT a teacher, NOT formal, just a real person in this scenario
2. Speak casual {self.language_code.upper()} like in real life - very relaxed and natural
3. Keep responses SHORT (5-10 words max, like real conversation)
4. READ THE CONVERSATION HISTORY ABOVE - remember what was already asked and answered
5. NEVER repeat questions you already asked - check what was said before
6. TRACK the interaction progress - move naturally through the process based on what's been discussed
7. Stay relevant to your role and what customers need from you in this scenario
8. If customer already answered a question, move to the NEXT step in the process
9. Help the customer achieve their goal: {self.scenario_goal}
{interaction_guide}
ADDITIONAL CONTEXT: {scenario_context}
IMPORTANT: Look at the conversation history above before responding! Don't ask questions that were already answered. Continue naturally from where the conversation left off! Help them complete their goal in this scenario."""
return base_prompt
def _get_interaction_guide(self) -> str:
"""Override in language-specific implementations"""
return """
INTERACTION FLOW:
- Respond naturally to customer needs
- Help them with whatever service you provide
- Keep conversation relevant to your role
"""

View File

@ -0,0 +1,443 @@
import asyncio
import json
import os
import logging
from typing import AsyncGenerator, Dict, Any, Optional, List
import base64
from google.cloud import speech
from google.cloud import texttospeech
from google.api_core import exceptions
import openai
from config import config
logger = logging.getLogger(__name__)
class SpeechToTextService:
def __init__(self, language_code: str = "en-US"):
self.client = speech.SpeechAsyncClient()  # async client so streaming can consume an async request generator
self.language_code = language_code
encoding_map = {
"WEBM_OPUS": speech.RecognitionConfig.AudioEncoding.WEBM_OPUS,
"LINEAR16": speech.RecognitionConfig.AudioEncoding.LINEAR16,
"FLAC": speech.RecognitionConfig.AudioEncoding.FLAC,
"MULAW": speech.RecognitionConfig.AudioEncoding.MULAW,
"AMR": speech.RecognitionConfig.AudioEncoding.AMR,
"AMR_WB": speech.RecognitionConfig.AudioEncoding.AMR_WB,
"OGG_OPUS": speech.RecognitionConfig.AudioEncoding.OGG_OPUS,
"MP3": speech.RecognitionConfig.AudioEncoding.MP3,
}
self.recognition_config = speech.RecognitionConfig(
encoding=encoding_map.get(config.SPEECH_ENCODING, speech.RecognitionConfig.AudioEncoding.WEBM_OPUS),
sample_rate_hertz=config.SPEECH_SAMPLE_RATE,
language_code=self.language_code,
enable_automatic_punctuation=True,
use_enhanced=True,
model="latest_long",
)
self.streaming_config = speech.StreamingRecognitionConfig(
config=self.recognition_config,
interim_results=True,
single_utterance=False,
)
async def transcribe_streaming(self, audio_generator: AsyncGenerator[bytes, None]) -> AsyncGenerator[Dict[str, Any], None]:
"""Stream audio data to Google Cloud Speech-to-Text and yield transcription results."""
try:
async def request_generator():
yield speech.StreamingRecognizeRequest(streaming_config=self.streaming_config)
async for chunk in audio_generator:
yield speech.StreamingRecognizeRequest(audio_content=chunk)
responses = await self.client.streaming_recognize(requests=request_generator())
async for response in responses:
for result in response.results:
transcript = result.alternatives[0].transcript
is_final = result.is_final
yield {
"type": "transcription",
"transcript": transcript,
"is_final": is_final,
"confidence": result.alternatives[0].confidence if is_final else 0.0
}
except exceptions.GoogleAPICallError as e:
yield {
"type": "error",
"message": f"Speech recognition error: {str(e)}"
}
class TextToSpeechService:
def __init__(self, language_code: str = "en-US"):
self.client = texttospeech.TextToSpeechClient()
self.language_code = language_code
self.gender_map = {
"FEMALE": texttospeech.SsmlVoiceGender.FEMALE,
"MALE": texttospeech.SsmlVoiceGender.MALE,
"NEUTRAL": texttospeech.SsmlVoiceGender.NEUTRAL,
"male": texttospeech.SsmlVoiceGender.MALE,
"female": texttospeech.SsmlVoiceGender.FEMALE,
}
def _get_voice_config(self, gender: str, character_name: str = None) -> Dict[str, Any]:
"""Override this method in language-specific implementations"""
tts_gender = self.gender_map.get(gender, texttospeech.SsmlVoiceGender.FEMALE)
return {
"name": f"{self.language_code}-Standard-A",
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": tts_gender,
}
def _get_voice_and_audio_config(self, gender: str, character_name: str = None) -> tuple:
"""Get appropriate voice and audio configuration based on gender."""
config_set = self._get_voice_config(gender, character_name)
voice = texttospeech.VoiceSelectionParams(
language_code=self.language_code,
name=config_set["name"],
ssml_gender=config_set["ssml_gender"],
)
audio_config_params = {
"audio_encoding": texttospeech.AudioEncoding.MP3, # MP3 for faster processing
"speaking_rate": config_set["speaking_rate"],
# Remove effects profile for faster generation
}
if config_set["pitch"] is not None:
audio_config_params["pitch"] = config_set["pitch"]
audio_config = texttospeech.AudioConfig(**audio_config_params)
return voice, audio_config
async def synthesize_speech(self, text: str, gender: str = "female", character_name: str = None) -> bytes:
"""Convert text to speech using Google Cloud Text-to-Speech."""
try:
logger.info(f"TTS synthesize_speech called with text: '{text}', gender: '{gender}', character: '{character_name}'")
voice, audio_config = self._get_voice_and_audio_config(gender, character_name)
logger.info(f"Using voice: {voice.name}, language: {self.language_code}")
synthesis_input = texttospeech.SynthesisInput(text=text)
response = self.client.synthesize_speech(
input=synthesis_input,
voice=voice,
audio_config=audio_config,
)
logger.info(f"TTS successful, audio length: {len(response.audio_content)} bytes")
return response.audio_content
except exceptions.GoogleAPICallError as e:
logger.error(f"Text-to-speech error: {str(e)}")
raise RuntimeError(f"Text-to-speech error: {e}") from e
class BaseAIConversationService:
def __init__(self, language_code: str = "en"):
self.client = openai.OpenAI(api_key=config.OPENAI_API_KEY)
self.model = config.OPENAI_MODEL
self.language_code = language_code
self.current_personality = None
self.conversation_history: List[Dict[str, str]] = []
self.goal_progress: List = []
def set_personality(self, personality):
"""Set the current personality for the conversation."""
self.current_personality = personality
self.conversation_history = []
if hasattr(personality, 'goal_items'):
self.goal_progress = [item.dict() for item in personality.goal_items]
def reset_conversation(self):
"""Reset the conversation history."""
self.conversation_history = []
if self.current_personality and hasattr(self.current_personality, 'goal_items'):
self.goal_progress = [item.dict() for item in self.current_personality.goal_items]
def get_personality_for_scenario(self, scenario: str, character_name: str = None):
"""Override in language-specific implementations"""
raise NotImplementedError("Must be implemented by language-specific service")
async def check_goal_completion(self, user_message: str, ai_response: str) -> bool:
"""Check if any goals are completed using LLM judge."""
if not self.goal_progress:
return False
goals_completed = False
incomplete_goals = [g for g in self.goal_progress if not g.get('completed', False)]
if not incomplete_goals:
return False
logger.info(f"Checking goal completion for user message: '{user_message}'")
conversation_context = ""
for exchange in self.conversation_history[-3:]:
conversation_context += f"User: {exchange['user']}\nAI: {exchange['assistant']}\n"
for goal in incomplete_goals:
completion_check = await self._judge_goal_completion(
goal,
user_message,
ai_response,
conversation_context
)
if completion_check:
goal['completed'] = True
goals_completed = True
logger.info(f"✅ Goal completed: {goal['description']}")
return goals_completed
async def _judge_goal_completion(self, goal, user_message: str, ai_response: str, conversation_context: str) -> bool:
"""Use LLM to judge if a specific goal was completed."""
try:
if "order" in goal['description'].lower() or "buy" in goal['description'].lower():
judge_prompt = f"""You are a strict judge determining if a specific goal was FULLY completed in a conversation.
GOAL TO CHECK: {goal['description']}
RECENT CONVERSATION CONTEXT:
{conversation_context}
LATEST EXCHANGE:
User: {user_message}
AI: {ai_response}
CRITICAL RULES FOR ORDERING GOALS:
1. ONLY return "YES" if the user has COMPLETELY finished this exact goal
2. Return "NO" if the goal is partial, incomplete, or just being discussed
3. For "Order [item]" goals: user must explicitly say they want/order that EXACT item
4. Don't mark as complete just because the AI is asking about it
Answer ONLY "YES" or "NO":"""
else:
judge_prompt = f"""You are judging if a conversational goal was completed in a natural conversation scenario.
GOAL TO CHECK: {goal['description']}
RECENT CONVERSATION CONTEXT:
{conversation_context}
LATEST EXCHANGE:
User: {user_message}
AI: {ai_response}
RULES FOR CONVERSATION GOALS:
1. Return "YES" if the user has naturally accomplished this conversational goal
2. Goals can be completed through natural conversation flow
3. Check the FULL conversation context, not just the latest exchange
Answer ONLY "YES" or "NO":"""
response = self.client.chat.completions.create(
model=self.model,
messages=[{"role": "user", "content": judge_prompt}],
max_tokens=5,
temperature=0.1,
)
result = response.choices[0].message.content.strip().upper()
return result == "YES"
except Exception as e:
logger.error(f"Error in goal completion judge: {str(e)}")
return False
def are_all_goals_completed(self) -> bool:
"""Check if all goals are completed."""
return all(goal.get('completed', False) for goal in self.goal_progress)
def get_goal_status(self) -> Dict[str, Any]:
"""Get current goal status."""
return {
"scenario_goal": self.current_personality.scenario_goal if self.current_personality else "",
"goal_items": [
{
"id": goal.get('id'),
"description": goal.get('description'),
"completed": goal.get('completed', False)
} for goal in self.goal_progress
],
"all_completed": self.are_all_goals_completed()
}
async def get_goal_status_async(self) -> Dict[str, Any]:
"""Async version of get_goal_status for parallel processing."""
return self.get_goal_status()
async def get_response(self, user_message: str, context: str = "") -> str:
"""Get AI response to user message using current personality."""
try:
if not self.current_personality:
raise Exception("No personality set")
system_prompt = self.current_personality.get_system_prompt(context)
messages = [{"role": "system", "content": system_prompt}]
recent_history = self.conversation_history[-8:] if len(self.conversation_history) > 8 else self.conversation_history
for exchange in recent_history:
messages.append({"role": "user", "content": exchange["user"]})
messages.append({"role": "assistant", "content": exchange["assistant"]})
messages.append({"role": "user", "content": user_message})
response = self.client.chat.completions.create(
model=self.model,
messages=messages,
max_tokens=250,
temperature=0.7,
)
ai_response = response.choices[0].message.content
self.conversation_history.append({
"user": user_message,
"assistant": ai_response
})
await self.check_goal_completion(user_message, ai_response)
return ai_response
except Exception as e:
logger.error(f"Error generating AI response: {str(e)}")
return f"Sorry, there was an error: {str(e)}"
class BaseConversationFlowService:
def __init__(self, language_code: str = "en-US"):
self.language_code = language_code
self.stt_service = SpeechToTextService(language_code)
self.tts_service = TextToSpeechService(language_code)
self.ai_service = BaseAIConversationService(language_code.split('-')[0])
def set_scenario_personality(self, scenario: str, character_name: str = None):
"""Set the personality based on scenario and character."""
personality = self.ai_service.get_personality_for_scenario(scenario, character_name)
if not self.ai_service.current_personality or self.ai_service.current_personality.name != personality.name:
logger.info(f"Setting new personality: {personality.name}")
self.ai_service.set_personality(personality)
async def generate_initial_greeting(self, scenario_context: str = "") -> Dict[str, Any]:
"""Generate initial greeting from character."""
try:
scenario = self.extract_scenario_from_context(scenario_context)
if scenario:
self.set_scenario_personality(scenario)
# Generate greeting based on personality
personality = self.ai_service.current_personality
if personality and personality.typical_phrases:
greeting = personality.typical_phrases[0] # Use first typical phrase
else:
greeting = "Hello!"
# Generate audio
gender = personality.gender.value if personality else "female"
personality_name = personality.name if personality else "Character"
audio_content = await self.tts_service.synthesize_speech(greeting, gender, personality_name)
audio_base64 = base64.b64encode(audio_content).decode('utf-8')
return {
"type": "ai_response",
"text": greeting,
"audio": audio_base64,
"audio_format": "mp3",
"character": personality_name,
"is_initial_greeting": True
}
except Exception as e:
return {
"type": "error",
"message": f"Initial greeting error: {str(e)}"
}
async def process_conversation_flow_fast(self, transcribed_text: str, scenario_context: str = "") -> Dict[str, Any]:
"""Fast conversation flow with parallel processing."""
try:
scenario = self.extract_scenario_from_context(scenario_context)
if scenario:
self.set_scenario_personality(scenario)
# Get personality info early
gender = self.ai_service.current_personality.gender.value if self.ai_service.current_personality else "female"
personality_name = self.ai_service.current_personality.name if self.ai_service.current_personality else "Unknown"
# Generate the AI response first; check_goal_completion runs inside get_response
ai_response = await self.ai_service.get_response(transcribed_text, scenario_context)
# Start TTS immediately; goal status is read while audio is being synthesized
tts_task = asyncio.create_task(self.tts_service.synthesize_speech(ai_response, gender, personality_name))
# Read goal status after get_response so newly completed goals are reflected
goal_status = self.ai_service.get_goal_status()
# Wait for TTS to complete
audio_content = await tts_task
audio_base64 = base64.b64encode(audio_content).decode('utf-8')
return {
"type": "ai_response",
"text": ai_response,
"audio": audio_base64,
"audio_format": "mp3",
"character": personality_name,
"goal_status": goal_status,
"conversation_complete": goal_status.get("all_completed", False)
}
except Exception as e:
return {
"type": "error",
"message": f"Conversation flow error: {str(e)}"
}
async def process_conversation_flow(self, transcribed_text: str, scenario_context: str = "") -> Dict[str, Any]:
"""Process the complete conversation flow: Text → AI → Speech."""
try:
scenario = self.extract_scenario_from_context(scenario_context)
if scenario:
self.set_scenario_personality(scenario)
ai_response = await self.ai_service.get_response(transcribed_text, scenario_context)
gender = self.ai_service.current_personality.gender.value if self.ai_service.current_personality else "female"
personality_name = self.ai_service.current_personality.name if self.ai_service.current_personality else "Unknown"
audio_content = await self.tts_service.synthesize_speech(ai_response, gender, personality_name)
audio_base64 = base64.b64encode(audio_content).decode('utf-8')
goal_status = self.ai_service.get_goal_status()
return {
"type": "ai_response",
"text": ai_response,
"audio": audio_base64,
"audio_format": "mp3",
"character": personality_name,
"goal_status": goal_status,
"conversation_complete": goal_status.get("all_completed", False)
}
except Exception as e:
return {
"type": "error",
"message": f"Conversation flow error: {str(e)}"
}
def extract_scenario_from_context(self, context: str) -> str:
"""Override in language-specific implementations"""
return "default"

View File

@ -0,0 +1,56 @@
"""German language configuration for Street Lingo"""
# German TTS Configuration
TTS_LANGUAGE_CODE = "de-DE"
SPEECH_LANGUAGE_CODE = "de-DE"
# German-specific settings
DEFAULT_SCENARIO = "spati"
COUNTRY_NAME = "Germany"
LANGUAGE_NAME = "German"
LOCALE = "de_DE"
# Currency and units
CURRENCY_SYMBOL = ""
DISTANCE_UNIT = "km"
# Cultural settings
FORMAL_ADDRESS = True # Use Sie/du distinction
TIME_FORMAT = "24h"
DATE_FORMAT = "DD.MM.YYYY"
# Berlin-specific settings
CITY_NAME = "Berlin"
TRANSPORT_SYSTEM = "BVG"
COMMON_DISTRICTS = [
"Mitte",
"Kreuzberg",
"Friedrichshain",
"Prenzlauer Berg",
"Charlottenburg",
"Neukölln",
"Schöneberg"
]
# Common German expressions for the AI to understand
COMMON_EXPRESSIONS = {
"greeting": ["Hallo", "Guten Tag", "Moin", "Servus"],
"goodbye": ["Tschüss", "Auf Wiedersehen", "Bis bald", "Ciao"],
"please": ["Bitte", "Bitte schön"],
"thank_you": ["Danke", "Danke schön", "Vielen Dank"],
"excuse_me": ["Entschuldigung", "Entschuldigen Sie"],
"yes": ["Ja", "Jawohl", "Genau"],
"no": ["Nein", "", "Nicht"],
"maybe": ["Vielleicht", "Kann sein", "Möglich"]
}
# Berlin slang and expressions
BERLIN_SLANG = {
"cool": ["krass", "geil", "nice"],
"annoying": ["ätzend", "nervig"],
"money": ["Kohle", "Kröten", "Moos"],
"food": ["Futter", "Grub"],
"party": ["feiern", "abgehen"],
"work": ["malochen", "schaffen"],
"tired": ["platt", "fertig"]
}
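Tables like COMMON_EXPRESSIONS lend themselves to simple phrase matching. A hedged sketch of how such a lookup could be used — `detect_intent` is a hypothetical helper and the table is abbreviated, not the full config above:

```python
from typing import Optional

# Abbreviated copy of the expression table for illustration
COMMON = {
    "greeting": ["Hallo", "Guten Tag", "Moin", "Servus"],
    "thank_you": ["Danke", "Danke schön", "Vielen Dank"],
}

def detect_intent(utterance: str) -> Optional[str]:
    # Case-insensitive substring match against the known phrase lists
    lowered = utterance.lower()
    for intent, phrases in COMMON.items():
        if any(p.lower() in lowered for p in phrases):
            return intent
    return None
```

Substring matching is deliberately loose (it catches "Moin, alles klar?"); a production matcher would likely tokenize first.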

View File

@ -0,0 +1,287 @@
from core.base_models import BasePersonality, CharacterType, PersonalityTone, Gender, GoalItem, HelpfulPhrase
class GermanPersonality(BasePersonality):
def __init__(self, **data):
data['language_code'] = "de"
data['country_code'] = "Germany"
super().__init__(**data)
def get_system_prompt(self, scenario_context: str = "") -> str:
"""Generate a system prompt for German conversations."""
german_instructions = """
- Use informal "du" unless it's a formal bureaucratic setting
- Use common German contractions like "ich hab" instead of "ich habe"
- Include Berlin slang and expressions when appropriate
- Sound like a real Berliner - direct but friendly
- Use "ne?" for confirmation questions
- Include typical Berlin expressions
"""
return super().get_system_prompt(scenario_context, german_instructions)
# Berlin-specific scenarios
SPATI_PERSONALITIES = {
"mehmet": GermanPersonality(
character_type=CharacterType.VENDOR,
name="Mehmet",
gender=Gender.MALE,
tone=PersonalityTone.CASUAL,
age_range="middle-aged",
background="Turkish-German Späti owner who's been in Berlin for 20 years",
typical_phrases=[
"Hallo, was brauchst du?",
"Alles klar?",
"Geht klar",
"Machst du",
"Schönen Abend noch",
"Bis später",
"Kein Problem",
"Läuft"
],
response_style="Friendly but direct, knows his regulars",
location_context="24/7 Späti in Kreuzberg",
scenario_title="At a Späti",
scenario_description="You're at a Berlin Späti (convenience store) buying late-night essentials. Practice ordering drinks, snacks, and everyday items in German.",
scenario_challenge="Understanding Berlin street German, dealing with informal language, and navigating the unique Späti culture.",
scenario_goal="Buy a beer and some snacks",
goal_items=[
GoalItem(
id="buy_beer",
description="Buy a beer (Bier kaufen)"
),
GoalItem(
id="buy_snacks",
description="Buy some snacks (Snacks kaufen)"
)
],
helpful_phrases=[
HelpfulPhrase(native="Ich hätte gern...", english="I would like..."),
HelpfulPhrase(native="Was kostet das?", english="How much does this cost?"),
HelpfulPhrase(native="Haben Sie...?", english="Do you have...?"),
HelpfulPhrase(native="Ein Bier, bitte", english="A beer, please"),
HelpfulPhrase(native="Danke schön", english="Thank you"),
HelpfulPhrase(native="Chips", english="Chips"),
HelpfulPhrase(native="Bezahlen", english="To pay")
],
is_helpful=True,
is_talkative=True,
uses_slang=True
)
}
WG_PERSONALITIES = {
"lisa": GermanPersonality(
character_type=CharacterType.NEIGHBOR,
name="Lisa",
gender=Gender.FEMALE,
tone=PersonalityTone.FRIENDLY,
age_range="young",
background="Berlin student showing her WG room to potential flatmates",
typical_phrases=[
"Hallo! Du bist wegen des Zimmers hier?",
"Komm rein!",
"Das ist unser Wohnzimmer",
"Die Küche teilen wir alle",
"Wir sind eine entspannte WG",
"Hast du Fragen?",
"Das würde monatlich kosten...",
"Wir melden uns bei dir"
],
response_style="Friendly but assessing compatibility for shared living",
location_context="Shared apartment in Prenzlauer Berg",
scenario_title="WG Room Viewing",
scenario_description="You're viewing a room in a Berlin shared apartment (WG). Practice asking about living arrangements, rent, and house rules in German.",
scenario_challenge="Understanding housing terminology, asking appropriate questions about shared living, and presenting yourself as a good flatmate.",
scenario_goal="Ask about rent, house rules, and express interest",
goal_items=[
GoalItem(
id="ask_rent",
description="Ask about monthly rent (Nach der Miete fragen)"
),
GoalItem(
id="ask_house_rules",
description="Ask about house rules (Nach Hausregeln fragen)"
),
GoalItem(
id="express_interest",
description="Express interest in the room (Interesse zeigen)"
)
],
helpful_phrases=[
HelpfulPhrase(native="Wie viel kostet das Zimmer?", english="How much does the room cost?"),
HelpfulPhrase(native="Sind Nebenkosten inklusive?", english="Are utilities included?"),
HelpfulPhrase(native="Wie ist die Hausordnung?", english="What are the house rules?"),
HelpfulPhrase(native="Wann kann ich einziehen?", english="When can I move in?"),
HelpfulPhrase(native="Das gefällt mir", english="I like it"),
HelpfulPhrase(native="Ich würde gerne hier wohnen", english="I would like to live here"),
HelpfulPhrase(native="Kaltmiete", english="Base rent"),
HelpfulPhrase(native="Warmmiete", english="Rent including utilities")
],
is_helpful=True,
is_talkative=True,
uses_slang=True
)
}
BURGERAMT_PERSONALITIES = {
"frau_schmidt": GermanPersonality(
character_type=CharacterType.OFFICIAL,
name="Frau Schmidt",
gender=Gender.FEMALE,
tone=PersonalityTone.FORMAL,
age_range="middle-aged",
background="Experienced civil servant at Berlin Bürgeramt",
typical_phrases=[
"Guten Tag, womit kann ich Ihnen helfen?",
"Haben Sie einen Termin?",
"Welche Dokumente haben Sie dabei?",
"Das müssen Sie ausfüllen",
"Unterschreiben Sie bitte hier",
"Das kostet 28 Euro",
"In 2-3 Wochen bekommen Sie Post",
"Auf Wiedersehen"
],
response_style="Professional, formal, efficient but can be helpful",
location_context="Berlin Bürgeramt office",
scenario_title="At the Bürgeramt",
scenario_description="You're at the Berlin Bürgeramt (civil services office) to register your address. Practice dealing with German bureaucracy and formal language.",
scenario_challenge="Understanding formal German, bureaucratic terminology, and navigating the registration process.",
scenario_goal="Complete address registration (Anmeldung)",
goal_items=[
GoalItem(
id="explain_purpose",
description="Explain you need to register your address (Anmeldung erklären)"
),
GoalItem(
id="provide_documents",
description="Provide required documents (Dokumente vorlegen)"
),
GoalItem(
id="complete_form",
description="Complete the registration form (Formular ausfüllen)"
)
],
helpful_phrases=[
HelpfulPhrase(native="Ich möchte mich anmelden", english="I want to register my address"),
HelpfulPhrase(native="Ich bin neu in Berlin", english="I'm new to Berlin"),
HelpfulPhrase(native="Welche Dokumente brauche ich?", english="What documents do I need?"),
HelpfulPhrase(native="Personalausweis", english="Identity card"),
HelpfulPhrase(native="Mietvertrag", english="Rental contract"),
HelpfulPhrase(native="Wohnungsgeberbestätigung", english="Landlord confirmation"),
HelpfulPhrase(native="Wie lange dauert das?", english="How long does it take?"),
HelpfulPhrase(native="Anmeldung", english="Address registration")
],
is_helpful=True,
is_talkative=False,
uses_slang=False
)
}
BIERGARTEN_PERSONALITIES = {
"klaus": GermanPersonality(
character_type=CharacterType.SERVICE_WORKER,
name="Klaus",
gender=Gender.MALE,
tone=PersonalityTone.CHEERFUL,
age_range="middle-aged",
background="Experienced Biergarten server who loves his job",
typical_phrases=[
"Hallo! Habt ihr schon gewählt?",
"Was darf's denn sein?",
"Möchtet ihr was zu essen dazu?",
"Eine Maß Bier?",
"Kommt sofort!",
"Prost!",
"Schmeckt's euch?",
"Zahlen zusammen oder getrennt?"
],
response_style="Cheerful and traditional, enjoys chatting with customers",
location_context="Traditional Biergarten in Tiergarten",
scenario_title="At a Biergarten",
scenario_description="You're at a Berlin Biergarten ordering food and drinks. Practice ordering in German and understanding traditional beer garden culture.",
scenario_challenge="Understanding German beer terminology, food options, and traditional Biergarten etiquette.",
scenario_goal="Order a beer and traditional German food",
goal_items=[
GoalItem(
id="order_beer",
description="Order a beer (Bier bestellen)"
),
GoalItem(
id="order_food",
description="Order traditional German food (Deutsches Essen bestellen)"
)
],
helpful_phrases=[
HelpfulPhrase(native="Eine Maß Bier, bitte", english="A liter of beer, please"),
HelpfulPhrase(native="Was empfehlen Sie?", english="What do you recommend?"),
HelpfulPhrase(native="Ich hätte gern...", english="I would like..."),
HelpfulPhrase(native="Schweinebraten", english="Roast pork"),
HelpfulPhrase(native="Schnitzel", english="Schnitzel"),
HelpfulPhrase(native="Sauerkraut", english="Sauerkraut"),
HelpfulPhrase(native="Die Rechnung, bitte", english="The bill, please"),
HelpfulPhrase(native="Prost!", english="Cheers!")
],
is_helpful=True,
is_talkative=True,
uses_slang=True
)
}
UBAHN_PERSONALITIES = {
"bvg_info": GermanPersonality(
character_type=CharacterType.SERVICE_WORKER,
name="BVG Mitarbeiter",
gender=Gender.MALE,
tone=PersonalityTone.HELPFUL,
age_range="young",
background="Helpful BVG information staff at U-Bahn station",
typical_phrases=[
"Kann ich Ihnen helfen?",
"Wohin möchten Sie denn?",
"Nehmen Sie die U6 Richtung...",
"Steigen Sie an... um",
"Das sind drei Stationen",
"Brauchen Sie eine Fahrkarte?",
"Zone AB reicht",
"Gute Fahrt!"
],
response_style="Professional and helpful with public transport",
location_context="U-Bahn station information desk",
scenario_title="U-Bahn Help",
scenario_description="You're at a Berlin U-Bahn station asking for directions and transport information. Practice asking about public transport in German.",
scenario_challenge="Understanding German public transport terminology, directions, and ticket system.",
scenario_goal="Get directions and buy appropriate ticket",
goal_items=[
GoalItem(
id="ask_directions",
description="Ask for directions (Nach dem Weg fragen)"
),
GoalItem(
id="buy_ticket",
description="Buy appropriate ticket (Passende Fahrkarte kaufen)"
)
],
helpful_phrases=[
HelpfulPhrase(native="Wie komme ich nach...?", english="How do I get to...?"),
HelpfulPhrase(native="Welche Linie muss ich nehmen?", english="Which line do I need to take?"),
HelpfulPhrase(native="Wo muss ich umsteigen?", english="Where do I need to change?"),
HelpfulPhrase(native="Wie viele Stationen?", english="How many stations?"),
HelpfulPhrase(native="Welche Fahrkarte brauche ich?", english="Which ticket do I need?"),
HelpfulPhrase(native="Einzelfahrkarte", english="Single ticket"),
HelpfulPhrase(native="Tageskarte", english="Day ticket"),
HelpfulPhrase(native="Richtung", english="Direction")
],
is_helpful=True,
is_talkative=False,
uses_slang=False
)
}
# Dictionary to easily access personalities by scenario
SCENARIO_PERSONALITIES = {
"spati": SPATI_PERSONALITIES,
"wg_viewing": WG_PERSONALITIES,
"burgeramt": BURGERAMT_PERSONALITIES,
"biergarten": BIERGARTEN_PERSONALITIES,
"ubahn": UBAHN_PERSONALITIES
}
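The SCENARIO_PERSONALITIES registry is consumed by a name-based lookup with a first-entry fallback (see `get_personality_for_scenario` in the services module). The lookup rule can be sketched with plain dicts standing in for the personality objects — the data here is hypothetical:

```python
from typing import Optional

# Keys are lowercase character ids, mirroring the registry layout
registry = {
    "spati": {"mehmet": "Mehmet-personality"},
    "biergarten": {"klaus": "Klaus-personality"},
}

def pick(scenario: str, character: Optional[str] = None) -> str:
    personalities = registry.get(scenario)
    if personalities is None:
        return "default-personality"  # stands in for the fallback GermanPersonality
    if character and character.lower() in personalities:
        return personalities[character.lower()]
    # Fall back to the first personality defined for the scenario
    return next(iter(personalities.values()))
```

Lowercasing the requested name matters because display names ("Mehmet") differ from registry keys ("mehmet").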

View File

@ -0,0 +1,138 @@
import logging
from typing import Dict, Any
from google.cloud import texttospeech
from core.speech_service import TextToSpeechService, BaseAIConversationService, BaseConversationFlowService
from .models import SCENARIO_PERSONALITIES, GermanPersonality
logger = logging.getLogger(__name__)
class GermanTextToSpeechService(TextToSpeechService):
def __init__(self):
super().__init__(language_code="de-DE")
def _get_voice_config(self, gender: str, character_name: str = None) -> Dict[str, Any]:
"""Get German-specific voice configuration."""
tts_gender = self.gender_map.get(gender, texttospeech.SsmlVoiceGender.FEMALE)
# Character-specific German voices using Chirp3-HD models
character_voice_map = {
"Mehmet": {
"name": "de-DE-Chirp3-HD-Charon", # Male voice with slight accent
"speaking_rate": 0.95,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
},
"Lisa": {
"name": "de-DE-Chirp3-HD-Kore", # Young female voice
"speaking_rate": 1.05,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
},
"Frau Schmidt": {
"name": "de-DE-Chirp3-HD-Zephyr", # Formal female voice
"speaking_rate": 0.9,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
},
"Klaus": {
"name": "de-DE-Chirp3-HD-Puck", # Cheerful male voice
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
},
"BVG Mitarbeiter": {
"name": "de-DE-Chirp3-HD-Fenrir", # Professional male voice
"speaking_rate": 0.95,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
}
}
# Generic German voices by gender using Chirp3-HD models
gender_voice_fallback = {
texttospeech.SsmlVoiceGender.MALE: {
"name": "de-DE-Chirp3-HD-Charon",
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
},
texttospeech.SsmlVoiceGender.FEMALE: {
"name": "de-DE-Chirp3-HD-Kore",
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
}
}
if character_name and character_name in character_voice_map:
config_set = character_voice_map[character_name]
logger.info(f"Using character-specific German voice for '{character_name}': {config_set['name']}")
return config_set
config_set = gender_voice_fallback.get(tts_gender, gender_voice_fallback[texttospeech.SsmlVoiceGender.FEMALE])
logger.info(f"Using German gender fallback voice for {tts_gender}: {config_set['name']}")
return config_set
class GermanAIConversationService(BaseAIConversationService):
def __init__(self):
super().__init__(language_code="de")
def get_personality_for_scenario(self, scenario: str, character_name: str = None) -> GermanPersonality:
"""Get German personality based on scenario and character name."""
if scenario in SCENARIO_PERSONALITIES:
personalities = SCENARIO_PERSONALITIES[scenario]
if character_name and character_name.lower() in personalities:
return personalities[character_name.lower()]
else:
return list(personalities.values())[0]
# Return default personality if scenario not found
from core.base_models import CharacterType, Gender, PersonalityTone
return GermanPersonality(
character_type=CharacterType.GENERIC,
name="Herr/Frau Müller",
gender=Gender.FEMALE,
tone=PersonalityTone.FRIENDLY,
age_range="middle-aged",
background="Helpful Berlin resident",
typical_phrases=["Hallo!", "Wie geht's?", "Kann ich helfen?"],
response_style="Friendly and helpful",
location_context="Berlin",
scenario_title="General Conversation",
scenario_description="General German conversation practice",
scenario_challenge="Practice basic German conversation",
scenario_goal="Have a natural conversation in German",
goal_items=[],
helpful_phrases=[],
is_helpful=True,
is_talkative=True
)
class GermanConversationFlowService(BaseConversationFlowService):
    def __init__(self):
        super().__init__(language_code="de-DE")
        self.tts_service = GermanTextToSpeechService()
        self.ai_service = GermanAIConversationService()

    def extract_scenario_from_context(self, context: str) -> str:
        """Extract the scenario type from a context string."""
        logger.info(f"Extracting German scenario from context: '{context}'")
        context_lower = context.lower()
        if "spati" in context_lower or "späti" in context_lower or "convenience" in context_lower:
            detected_scenario = "spati"
        elif "wg" in context_lower or "room" in context_lower or "apartment" in context_lower:
            detected_scenario = "wg_viewing"
        elif "bürgeramt" in context_lower or "burgeramt" in context_lower or "registration" in context_lower:
            detected_scenario = "burgeramt"
        elif "biergarten" in context_lower or "beer" in context_lower or "restaurant" in context_lower:
            detected_scenario = "biergarten"
        elif "ubahn" in context_lower or "u-bahn" in context_lower or "transport" in context_lower:
            detected_scenario = "ubahn"
        else:
            detected_scenario = "spati"  # Default to späti
        logger.info(f"Detected German scenario: '{detected_scenario}'")
        return detected_scenario

@@ -0,0 +1,65 @@
import os
from typing import List

from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()


class Config:
    """Configuration settings loaded from environment variables."""

    # Google Cloud Configuration
    GOOGLE_APPLICATION_CREDENTIALS: str = os.getenv("GOOGLE_APPLICATION_CREDENTIALS", "")
    GOOGLE_CLOUD_PROJECT: str = os.getenv("GOOGLE_CLOUD_PROJECT", "")

    # OpenAI Configuration
    OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")
    OPENAI_MODEL: str = os.getenv("OPENAI_MODEL", "gpt-4o-mini")

    # Speech-to-Text Configuration
    SPEECH_LANGUAGE_CODE: str = os.getenv("SPEECH_LANGUAGE_CODE", "id-ID")
    SPEECH_SAMPLE_RATE: int = int(os.getenv("SPEECH_SAMPLE_RATE", "48000"))
    SPEECH_ENCODING: str = os.getenv("SPEECH_ENCODING", "WEBM_OPUS")

    # Text-to-Speech Configuration
    TTS_LANGUAGE_CODE: str = os.getenv("TTS_LANGUAGE_CODE", "id-ID")
    TTS_VOICE_NAME: str = os.getenv("TTS_VOICE_NAME", "id-ID-Standard-A")
    TTS_VOICE_GENDER: str = os.getenv("TTS_VOICE_GENDER", "FEMALE")
    TTS_SPEAKING_RATE: float = float(os.getenv("TTS_SPEAKING_RATE", "1.0"))
    TTS_PITCH: float = float(os.getenv("TTS_PITCH", "0.0"))

    # Server Configuration
    HOST: str = os.getenv("HOST", "0.0.0.0")
    PORT: int = int(os.getenv("PORT", "8000"))
    DEBUG: bool = os.getenv("DEBUG", "false").lower() == "true"

    # CORS Configuration
    CORS_ORIGINS: List[str] = [
        origin.strip()
        for origin in os.getenv("CORS_ORIGINS", "http://localhost:3000,http://localhost:5173").split(",")
    ]

    @classmethod
    def validate(cls) -> None:
        """Validate that required environment variables are set."""
        required_vars = [
            ("OPENAI_API_KEY", cls.OPENAI_API_KEY),
        ]
        missing_vars = []
        for var_name, var_value in required_vars:
            if not var_value:
                missing_vars.append(var_name)
        if missing_vars:
            raise ValueError(f"Missing required environment variables: {', '.join(missing_vars)}")
        # Warn about optional but recommended variables
        if not cls.GOOGLE_APPLICATION_CREDENTIALS:
            print("Warning: GOOGLE_APPLICATION_CREDENTIALS not set. Speech features may not work.")
        if not cls.GOOGLE_CLOUD_PROJECT:
            print("Warning: GOOGLE_CLOUD_PROJECT not set. Some Google Cloud features may not work.")


# Global config instance
config = Config()
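As a quick illustration of how `Config` reads comma-separated values, the `CORS_ORIGINS` parsing can be sketched standalone (a hypothetical check, not part of the app):

```python
import os

# Hypothetical standalone check of the CORS_ORIGINS parsing used by Config:
# the comma-separated env var is split and each origin is stripped of whitespace.
os.environ["CORS_ORIGINS"] = "http://localhost:3000, http://localhost:5173"
origins = [
    origin.strip()
    for origin in os.getenv("CORS_ORIGINS", "http://localhost:3000,http://localhost:5173").split(",")
]
print(origins)  # ['http://localhost:3000', 'http://localhost:5173']
```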

@@ -0,0 +1,354 @@
from enum import Enum
from typing import List

from core.base_models import BasePersonality, CharacterType, PersonalityTone, Gender, GoalItem, HelpfulPhrase


class IndonesianPersonality(BasePersonality):
    def __init__(self, **data):
        data['language_code'] = "id"
        data['country_code'] = "Indonesia"
        super().__init__(**data)

    def get_system_prompt(self, scenario_context: str = "") -> str:
        """Generate a system prompt for Indonesian conversations."""
        indonesian_instructions = """
- Use "gak" instead of "tidak", "udah" instead of "sudah", etc.
- Sound like real Indonesian street conversation
- Be casual and natural like real Indonesian people
- Use common Indonesian contractions and informal speech
"""
        return super().get_system_prompt(scenario_context, indonesian_instructions)
# Convert existing models to use new structure
from pydantic import BaseModel


class Personality(BaseModel):
    character_type: CharacterType
    name: str
    gender: Gender
    tone: PersonalityTone
    age_range: str
    background: str
    typical_phrases: List[str]
    response_style: str
    location_context: str
    scenario_title: str
    scenario_description: str
    scenario_challenge: str
    scenario_goal: str
    goal_items: List[GoalItem]
    helpful_phrases: List[HelpfulPhrase]
    is_impatient: bool = False
    is_helpful: bool = True
    is_talkative: bool = True
    uses_slang: bool = False

    def get_system_prompt(self, scenario_context: str = "") -> str:
        """Generate a system prompt based on this personality."""
        casualness_note = """
SPEAKING STYLE - BE VERY CASUAL AND NATURAL:
- Use everyday Indonesian like real people do
- Drop formal words when people actually don't use them
- Use contractions and casual speech patterns
- Speak like you're talking to a friend or regular customer
- Don't be overly polite or formal - be natural and relaxed
- Use "gak" instead of "tidak", "udah" instead of "sudah", etc.
- Sound like real Indonesian street conversation
"""
        interaction_guide = ""
        if self.character_type == CharacterType.VENDOR:
            interaction_guide = """
INTERACTION FLOW:
- Greet → Ask what they want → Ask details (spice, egg, etc.) → Ask for drink → Give total → Finish
- Remember what they've already ordered - don't repeat questions
"""
        elif self.character_type == CharacterType.DRIVER:
            interaction_guide = """
INTERACTION FLOW:
- Ask destination → Negotiate price → Mention traffic/conditions → Agree on price → Give ride instructions
- Focus on practical transport concerns
"""
        elif self.character_type == CharacterType.CASHIER:
            interaction_guide = """
INTERACTION FLOW:
- Greet → Ask what they're buying → Scan items → Give total → Ask about bags → Complete transaction
- Keep it efficient and friendly
"""
        else:
            interaction_guide = """
INTERACTION FLOW:
- Respond naturally to customer needs
- Help them with whatever service you provide
- Keep conversation relevant to your role
"""
        base_prompt = f"""You are {self.name}, a real {self.character_type.value.replace('_', ' ')} in Indonesia. You talk like a normal Indonesian person - casual, natural, and relaxed.

SCENARIO CONTEXT:
📍 {self.scenario_title}
🎯 What's happening: {self.scenario_description}
Challenge: {self.scenario_challenge}
🏆 Goal: {self.scenario_goal}
{casualness_note}
CHARACTER:
- {self.name} ({self.age_range} {self.character_type.value.replace('_', ' ')})
- {self.background}
- Works at: {self.location_context}
- Personality: {self.tone.value}, {'talkative' if self.is_talkative else 'quiet'}, {'helpful' if self.is_helpful else 'business-focused'}

YOUR TYPICAL PHRASES (use these naturally):
{chr(10).join(f'- {phrase}' for phrase in self.typical_phrases)}

CRITICAL RULES - READ CONVERSATION HISTORY CAREFULLY:
1. You are {self.name} - NOT a teacher, NOT formal, just a real person in this scenario
2. Speak casual Indonesian like in real life - very relaxed and natural
3. Keep responses SHORT (5-10 words max, like real conversation)
4. READ THE CONVERSATION HISTORY ABOVE - remember what was already asked and answered
5. NEVER repeat questions you already asked - check what was said before
6. TRACK the interaction progress - move naturally through the process based on what's been discussed
7. Use informal language: "gak" not "tidak", "udah" not "sudah", "gimana" not "bagaimana"
8. Stay relevant to your role and what customers need from you in this scenario
9. If customer already answered a question, move to the NEXT step in the process
10. Help the customer achieve their goal: {self.scenario_goal}
{interaction_guide}
ADDITIONAL CONTEXT: {scenario_context}

IMPORTANT: Look at the conversation history above before responding! Don't ask questions that were already answered. Continue naturally from where the conversation left off! Help them complete their goal in this scenario."""
        return base_prompt
WARUNG_PERSONALITIES = {
    "pak_budi": IndonesianPersonality(
        character_type=CharacterType.VENDOR,
        name="Pak Budi",
        gender=Gender.MALE,
        tone=PersonalityTone.CASUAL,
        age_range="middle-aged",
        background="Chill warung owner who knows his regular customers",
        typical_phrases=[
            "Mau apa?",
            "Pedes gak?",
            "Telur ditambahin?",
            "Minum apa?",
            "Tunggu ya",
            "Udah jadi nih",
            "Berapa ribu ya...",
            "Makasih Bos"
        ],
        response_style="Quick and casual, gets straight to the point",
        location_context="Small warung near campus",
        scenario_title="At a Warung",
        scenario_description="You're at a local Indonesian warung (small restaurant) trying to order food and drinks. Practice ordering in Indonesian and navigating the casual dining experience.",
        scenario_challenge="Understanding local food terminology, spice levels, and casual Indonesian conversation patterns. The owner speaks quickly and uses informal language.",
        scenario_goal="Order nasi goreng pedas and teh manis",
        goal_items=[
            {"id": "order_nasi_goreng", "description": "Order nasi goreng pedas"},
            {"id": "order_drink", "description": "Order teh manis"}
        ],
        helpful_phrases=[
            {"native": "Saya mau...", "english": "I want..."},
            {"native": "Berapa harganya?", "english": "How much?"},
            {"native": "Terima kasih", "english": "Thank you"},
            {"native": "Pedas", "english": "Spicy"},
            {"native": "Teh manis", "english": "Sweet tea"},
            {"native": "Nasi goreng", "english": "Fried rice"}
        ],
        is_helpful=True,
        is_talkative=False,
        uses_slang=True
    ),
    "ibu_sari": IndonesianPersonality(
        character_type=CharacterType.VENDOR,
        name="Ibu Sari",
        gender=Gender.FEMALE,
        tone=PersonalityTone.CHEERFUL,
        age_range="middle-aged",
        background="Friendly warung owner who likes to chat with customers",
        typical_phrases=[
            "Eh, mau apa Dek?",
            "Udah laper ya?",
            "Pedes level berapa?",
            "Es teh manis?",
            "Sebentar ya Dek",
            "Nih, masih panas",
            "Hati-hati ya"
        ],
        response_style="Friendly but not overly formal, treats customers warmly",
        location_context="Busy warung in residential area",
        scenario_title="At a Warung",
        scenario_description="You're at a local Indonesian warung (small restaurant) trying to order food and drinks. Practice ordering in Indonesian and navigating the casual dining experience.",
        scenario_challenge="Understanding local food terminology, spice levels, and casual Indonesian conversation patterns. The owner is chatty and may engage in small talk.",
        scenario_goal="Order nasi goreng pedas and teh manis",
        goal_items=[
            {"id": "order_nasi_goreng", "description": "Order nasi goreng pedas"},
            {"id": "order_drink", "description": "Order teh manis"}
        ],
        helpful_phrases=[
            {"native": "Saya mau...", "english": "I want..."},
            {"native": "Berapa harganya?", "english": "How much?"},
            {"native": "Terima kasih", "english": "Thank you"},
            {"native": "Pedas", "english": "Spicy"},
            {"native": "Teh manis", "english": "Sweet tea"},
            {"native": "Nasi goreng", "english": "Fried rice"}
        ],
        is_helpful=True,
        is_talkative=True,
        uses_slang=True
    )
}
OJEK_PERSONALITIES = {
    "mbak_sari": IndonesianPersonality(
        character_type=CharacterType.DRIVER,
        name="Mbak Sari",
        gender=Gender.FEMALE,
        tone=PersonalityTone.CASUAL,
        age_range="young",
        background="Smart ojek driver who knows how to negotiate",
        typical_phrases=[
            "Kemana Mas?",
            "Wah macet nih",
            "Bensin naik lagi",
            "Udah deket kok",
            "Pegang yang kuat",
            "Sampai deh",
            "Ati-ati ya",
            "Jangan bilang-bilang"
        ],
        response_style="Direct and business-minded, mentions practical concerns",
        location_context="Busy street corner",
        scenario_title="Taking an Ojek",
        scenario_description="You need to get a motorcycle taxi (ojek) to take you to the mall. Practice negotiating destination and price in Indonesian.",
        scenario_challenge="Learning transportation vocabulary, price negotiation, and understanding Jakarta traffic concerns. The driver may try to charge tourist prices.",
        scenario_goal="Negotiate ride to mall and agree on price",
        goal_items=[
            {"id": "state_destination", "description": "Tell destination (mall)"},
            {"id": "agree_price", "description": "Agree on price"}
        ],
        helpful_phrases=[
            {"native": "Ke mall berapa?", "english": "How much to the mall?"},
            {"native": "Mahal banget!", "english": "That's too expensive!"},
            {"native": "Lima belas ribu boleh?", "english": "Is 15 thousand OK?"},
            {"native": "Ayo!", "english": "Let's go!"},
            {"native": "Ke mall", "english": "To the mall"},
            {"native": "Berapa ongkosnya?", "english": "How much is the fare?"}
        ],
        is_helpful=True,
        is_talkative=True,
        uses_slang=True
    )
}
CASHIER_PERSONALITIES = {
    "adik_kasir": IndonesianPersonality(
        character_type=CharacterType.CASHIER,
        name="Adik Kasir",
        gender=Gender.FEMALE,
        tone=PersonalityTone.CASUAL,
        age_range="young",
        background="Young cashier who's chill and helpful",
        typical_phrases=[
            "Malam Kak",
            "Beli apa?",
            "Yang lain?",
            "Pake kantong?",
            "Total sekian",
            "Kembaliannya",
            "Makasih ya",
            "Ati-ati"
        ],
        response_style="Quick and efficient, gets the job done",
        location_context="Alfamart convenience store",
        scenario_title="At Alfamart",
        scenario_description="You're shopping at Alfamart, a popular Indonesian convenience store chain. Practice buying everyday items and completing a transaction in Indonesian.",
        scenario_challenge="Understanding convenience store vocabulary, payment interactions, and polite customer service language. Learn about Indonesian instant noodle brands and local products.",
        scenario_goal="Buy Indomie and mineral water",
        goal_items=[
            {"id": "buy_indomie", "description": "Buy Indomie"},
            {"id": "buy_water", "description": "Buy mineral water"}
        ],
        helpful_phrases=[
            {"native": "Saya mau beli...", "english": "I want to buy..."},
            {"native": "Berapa totalnya?", "english": "How much is the total?"},
            {"native": "Pake kantong", "english": "With a bag"},
            {"native": "Bayar cash", "english": "Pay with cash"},
            {"native": "Indomie", "english": "Indomie (instant noodles)"},
            {"native": "Air mineral", "english": "Mineral water"}
        ],
        is_helpful=True,
        is_talkative=False,
        uses_slang=True
    )
}
COFFEE_SHOP_PERSONALITIES = {
    "tetangga_ali": IndonesianPersonality(
        character_type=CharacterType.GENERIC,
        name="Tetangga Ali",
        gender=Gender.MALE,
        tone=PersonalityTone.CHATTY,
        age_range="middle-aged",
        background="Friendly neighborhood guy who loves chatting with everyone about everything",
        typical_phrases=[
            "Eh, apa kabar?",
            "Lagi ngapain nih?",
            "Cuacanya panas banget ya hari ini",
            "Udah makan belum?",
            "Gimana kabar keluarga?",
            "Kerja dimana sekarang?",
            "Udah lama gak ketemu",
            "Wah, sibuk banget ya",
            "Ngomong-ngomong...",
            "Oh iya, tau gak...",
            "Kemarin aku ke...",
            "Eh, kamu pernah ke...?"
        ],
        response_style="Very talkative, asks lots of questions, shares stories, makes connections to random topics",
        location_context="Local coffee shop in residential area",
        scenario_title="Coffee Shop Small Talk",
        scenario_description="You're at a local coffee shop and meet a very friendly neighbor who loves to chat. Practice making small talk in Indonesian - discussing weather, family, work, hobbies, and daily life.",
        scenario_challenge="Learn natural small talk patterns, question-asking, and how to keep conversations flowing in Indonesian. Practice responding to personal questions and sharing about yourself.",
        scenario_goal="Have a natural small talk conversation covering at least 3 different topics",
        goal_items=[
            {"id": "greet_and_respond", "description": "Exchange greetings and ask how each other is doing"},
            {"id": "discuss_weather_daily_life", "description": "Talk about weather, daily activities, or current situation"},
            {"id": "share_personal_info", "description": "Share something about yourself (work, family, hobbies, etc.)"},
            {"id": "ask_followup_questions", "description": "Ask follow-up questions to keep the conversation going"}
        ],
        helpful_phrases=[
            {"native": "Apa kabar?", "english": "How are you?"},
            {"native": "Baik-baik aja", "english": "I'm doing fine"},
            {"native": "Lagi ngapain?", "english": "What are you up to?"},
            {"native": "Cuacanya panas ya", "english": "The weather is hot, isn't it?"},
            {"native": "Udah makan belum?", "english": "Have you eaten yet?"},
            {"native": "Gimana kabar keluarga?", "english": "How's the family?"},
            {"native": "Kerja dimana?", "english": "Where do you work?"},
            {"native": "Ngomong-ngomong...", "english": "By the way..."},
            {"native": "Oh iya...", "english": "Oh yes..."},
            {"native": "Wah, menarik!", "english": "Wow, interesting!"},
            {"native": "Bener juga ya", "english": "That's true"},
            {"native": "Udah lama gak ketemu", "english": "Haven't seen you in a while"}
        ],
        is_helpful=True,
        is_talkative=True,
        is_impatient=False,
        uses_slang=True
    )
}
SCENARIO_PERSONALITIES = {
    "warung": WARUNG_PERSONALITIES,
    "ojek": OJEK_PERSONALITIES,
    "alfamart": CASHIER_PERSONALITIES,
    "coffee_shop": COFFEE_SHOP_PERSONALITIES
}
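The two-level `SCENARIO_PERSONALITIES` mapping (scenario → character id → personality) is resolved by the services with a first-entry fallback; a minimal sketch of that lookup, using plain strings as hypothetical stand-ins for the personality objects:

```python
# Hypothetical stand-in data mirroring the scenario -> character -> personality shape
personalities = {"warung": {"pak_budi": "Pak Budi", "ibu_sari": "Ibu Sari"}}

def pick(scenario: str, character: str = None):
    # Same fallback logic as get_personality_for_scenario: the named character
    # if present, otherwise the first character defined for the scenario.
    chars = personalities.get(scenario, {})
    if character and character in chars:
        return chars[character]
    return next(iter(chars.values()), None)

print(pick("warung"))              # Pak Budi
print(pick("warung", "ibu_sari"))  # Ibu Sari
```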

@@ -0,0 +1,136 @@
import logging
from typing import Dict, Any

from google.cloud import texttospeech

from core.speech_service import TextToSpeechService, BaseAIConversationService, BaseConversationFlowService
from .models import SCENARIO_PERSONALITIES, Personality

logger = logging.getLogger(__name__)


class IndonesianTextToSpeechService(TextToSpeechService):
    def __init__(self):
        super().__init__(language_code="id-ID")

    def _get_voice_config(self, gender: str, character_name: str = None) -> Dict[str, Any]:
        """Get Indonesian-specific voice configuration."""
        tts_gender = self.gender_map.get(gender, texttospeech.SsmlVoiceGender.FEMALE)
        character_voice_map = {
            "Pak Budi": {
                "name": "id-ID-Chirp3-HD-Charon",
                "speaking_rate": 0.95,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.MALE,
            },
            "Ibu Sari": {
                "name": "id-ID-Chirp3-HD-Kore",
                "speaking_rate": 1.0,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
            },
            "Mbak Sari": {
                "name": "id-ID-Chirp3-HD-Zephyr",
                "speaking_rate": 1.1,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
            },
            "Adik Kasir": {
                "name": "id-ID-Chirp3-HD-Aoede",
                "speaking_rate": 1.05,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
            },
            "Tetangga Ali": {
                "name": "id-ID-Chirp3-HD-Puck",
                "speaking_rate": 1.05,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.MALE,
            }
        }
        gender_voice_fallback = {
            texttospeech.SsmlVoiceGender.MALE: {
                "name": "id-ID-Chirp3-HD-Fenrir",
                "speaking_rate": 1.0,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.MALE,
            },
            texttospeech.SsmlVoiceGender.FEMALE: {
                "name": "id-ID-Chirp3-HD-Leda",
                "speaking_rate": 1.0,
                "pitch": None,
                "ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
            }
        }
        if character_name and character_name in character_voice_map:
            config_set = character_voice_map[character_name]
            logger.info(f"Using character-specific voice for '{character_name}': {config_set['name']}")
            return config_set
        config_set = gender_voice_fallback.get(tts_gender, gender_voice_fallback[texttospeech.SsmlVoiceGender.FEMALE])
        logger.info(f"Using gender fallback voice for {tts_gender}: {config_set['name']}")
        return config_set
class IndonesianAIConversationService(BaseAIConversationService):
    def __init__(self):
        super().__init__(language_code="id")

    def get_personality_for_scenario(self, scenario: str, character_name: str = None) -> Personality:
        """Get Indonesian personality based on scenario and character name."""
        if scenario in SCENARIO_PERSONALITIES:
            personalities = SCENARIO_PERSONALITIES[scenario]
            if character_name and character_name in personalities:
                return personalities[character_name]
            else:
                return list(personalities.values())[0]
        # Return default personality if scenario not found
        from .models import Personality, CharacterType, Gender, PersonalityTone, GoalItem, HelpfulPhrase
        return Personality(
            character_type=CharacterType.GENERIC,
            name="Pak/Bu",
            gender=Gender.FEMALE,
            tone=PersonalityTone.FRIENDLY,
            age_range="middle-aged",
            background="Helpful Indonesian person",
            typical_phrases=["Halo!", "Apa kabar?", "Bisa saya bantu?"],
            response_style="Friendly and helpful",
            location_context="Indonesia",
            scenario_title="General Conversation",
            scenario_description="General Indonesian conversation practice",
            scenario_challenge="Practice basic Indonesian conversation",
            scenario_goal="Have a natural conversation",
            goal_items=[],
            helpful_phrases=[],
            language_code="id",
            country_code="Indonesia",
            is_helpful=True,
            is_talkative=True
        )
class IndonesianConversationFlowService(BaseConversationFlowService):
    def __init__(self):
        super().__init__(language_code="id-ID")
        self.tts_service = IndonesianTextToSpeechService()
        self.ai_service = IndonesianAIConversationService()

    def extract_scenario_from_context(self, context: str) -> str:
        """Extract the scenario type from a context string."""
        logger.info(f"Extracting scenario from context: '{context}'")
        context_lower = context.lower()
        if "coffee_shop" in context_lower or "coffee" in context_lower:
            detected_scenario = "coffee_shop"
        elif "warung" in context_lower or "nasi goreng" in context_lower:
            detected_scenario = "warung"
        elif "ojek" in context_lower or "mall" in context_lower:
            detected_scenario = "ojek"
        elif "alfamart" in context_lower or "indomie" in context_lower:
            detected_scenario = "alfamart"
        else:
            detected_scenario = "warung"  # Default to warung
        logger.info(f"Detected scenario: '{detected_scenario}'")
        return detected_scenario
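The ordered keyword matching above can be exercised in isolation; a small sketch (a hypothetical standalone copy of the same checks, in the same priority order):

```python
def extract_scenario(context: str) -> str:
    # Ordered keyword checks, same priority as the service method above
    c = context.lower()
    if "coffee_shop" in c or "coffee" in c:
        return "coffee_shop"
    if "warung" in c or "nasi goreng" in c:
        return "warung"
    if "ojek" in c or "mall" in c:
        return "ojek"
    if "alfamart" in c or "indomie" in c:
        return "alfamart"
    return "warung"  # default scenario

print(extract_scenario("Ordering nasi goreng at a warung"))  # warung
print(extract_scenario("Taking an ojek to the mall"))        # ojek
```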

backend/main.py Normal file
@@ -0,0 +1,818 @@
import difflib
import re
import json
import base64
import logging
import time
from typing import Dict, Any, List

from fastapi import FastAPI, HTTPException, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from google.cloud import speech
import openai

from languages.indonesian.services import IndonesianConversationFlowService
from languages.german.services import GermanConversationFlowService
from config import config

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = FastAPI()
config.validate()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Temporarily allow all origins for debugging
    allow_credentials=False,  # Must be False when using allow_origins=["*"]
    allow_methods=["*"],
    allow_headers=["*"],
)

# Language-specific services
language_services = {
    "indonesian": IndonesianConversationFlowService(),
    "german": GermanConversationFlowService()
}
class ResponseCheck(BaseModel):
    user_response: str
    expected_response: str
    scenario: str


class ResponseResult(BaseModel):
    is_correct: bool
    feedback: str
    similarity: float


class TranslationRequest(BaseModel):
    text: str
    source_language: str
    target_language: str


class TranslationResult(BaseModel):
    translation: str
    source_text: str


class SuggestionRequest(BaseModel):
    language: str
    scenario: str
    conversation_history: List[Dict[str, str]]


class SuggestionResponse(BaseModel):
    intro: str
    suggestions: List[Dict[str, str]]


class ConversationFeedbackRequest(BaseModel):
    language: str
    scenario: str
    conversation_history: List[Dict[str, str]]


class ConversationFeedbackResponse(BaseModel):
    encouragement: str
    suggestions: List[Dict[str, str]]
    examples: List[Dict[str, str]]
def normalize_text(text: str) -> str:
    text = text.lower().strip()
    text = re.sub(r"[^\w\s]", "", text)
    text = re.sub(r"\s+", " ", text)
    return text


def calculate_similarity(text1: str, text2: str) -> float:
    normalized1 = normalize_text(text1)
    normalized2 = normalize_text(text2)
    return difflib.SequenceMatcher(None, normalized1, normalized2).ratio()
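The normalize-then-compare approach means punctuation, casing, and extra whitespace don't hurt the score; a quick sketch with `difflib` (a standalone copy of the logic above):

```python
import difflib
import re

def _norm(text: str) -> str:
    # Lowercase, strip punctuation, collapse whitespace - as in normalize_text
    text = text.lower().strip()
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text)

ratio = difflib.SequenceMatcher(
    None, _norm("Saya mau nasi goreng!"), _norm("saya  mau nasi goreng")
).ratio()
print(ratio)  # 1.0 - identical after normalization
```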
def generate_feedback(
    user_response: str, expected_response: str, similarity: float, scenario: str
) -> str:
    if similarity >= 0.9:
        return "Perfect! Excellent Indonesian!"
    elif similarity >= 0.7:
        return "Great job! That's correct!"
    elif similarity >= 0.5:
        return f"Good attempt! Try: '{expected_response}'"
    elif similarity >= 0.3:
        return f"Close, but try again. Expected: '{expected_response}'"
    else:
        return f"Not quite right. The correct answer is: '{expected_response}'"
@app.post("/api/check-response", response_model=ResponseResult)
async def check_response(request: ResponseCheck) -> ResponseResult:
"""Check user response against expected response."""
try:
similarity = calculate_similarity(request.user_response, request.expected_response)
is_correct = similarity >= 0.7
feedback = generate_feedback(
request.user_response,
request.expected_response,
similarity,
request.scenario,
)
return ResponseResult(
is_correct=is_correct,
feedback=feedback,
similarity=similarity,
)
except Exception as e:
raise HTTPException(status_code=500, detail=str(e)) from e
@app.get("/api/scenarios/{language}")
async def get_scenarios(language: str) -> dict:
"""Get scenarios for a specific language (indonesian or german)"""
if language == "indonesian":
from languages.indonesian.models import SCENARIO_PERSONALITIES
native_key = "indonesian"
elif language == "german":
from languages.german.models import SCENARIO_PERSONALITIES
native_key = "native"
else:
raise HTTPException(status_code=400, detail="Unsupported language")
scenarios = {}
for scenario_id, personalities in SCENARIO_PERSONALITIES.items():
default_personality = list(personalities.values())[0]
scenarios[scenario_id] = {
"id": scenario_id,
"title": default_personality.scenario_title,
"description": default_personality.scenario_description,
"challenge": default_personality.scenario_challenge,
"goal": default_personality.scenario_goal,
"character": default_personality.name,
"character_background": default_personality.background,
"character_gender": default_personality.gender.value,
"location": default_personality.location_context,
"language": language,
"goal_items": [
{
"id": item.id,
"description": item.description,
"completed": False
} for item in default_personality.goal_items
],
"helpful_phrases": [
{
native_key: phrase.native if hasattr(phrase, 'native') else phrase.indonesian,
"english": phrase.english
} for phrase in default_personality.helpful_phrases
],
"available_characters": [
{
"id": char_id,
"name": char.name,
"background": char.background,
"tone": char.tone.value,
"gender": char.gender.value
} for char_id, char in personalities.items()
]
}
return scenarios
@app.get("/api/scenarios")
async def get_all_scenarios() -> dict:
"""Get all available scenarios for all languages"""
all_scenarios = {}
# Get Indonesian scenarios
indonesian_scenarios = await get_scenarios("indonesian")
all_scenarios["indonesian"] = indonesian_scenarios
# Get German scenarios
german_scenarios = await get_scenarios("german")
all_scenarios["german"] = german_scenarios
return all_scenarios
@app.post("/api/suggestions", response_model=SuggestionResponse)
async def generate_suggestions(request: SuggestionRequest) -> SuggestionResponse:
"""Generate contextual language suggestions based on conversation history."""
logger.info(f"Received suggestions request: language={request.language}, scenario={request.scenario}")
try:
client = openai.OpenAI(api_key=config.OPENAI_API_KEY)
# Get recent conversation context
conversation_context = ""
for i, msg in enumerate(request.conversation_history[-4:]):
conversation_context += f"{msg['type'].capitalize()}: {msg['text']}\n"
# Determine target language and context
if request.language == "german":
target_language = "German"
native_language = "English"
scenario_prompt = f"in a {request.scenario} scenario in Germany"
else:
target_language = "Indonesian"
native_language = "English"
scenario_prompt = f"in a {request.scenario} scenario in Indonesia"
suggestion_prompt = f"""You are a helpful language learning assistant. Based on the conversation history below, suggest 3 useful phrases the user might want to say next in {target_language}.
Conversation context {scenario_prompt}:
{conversation_context}
Provide suggestions as a JSON object with:
- "intro": A brief encouraging message about what they might want to say next
- "suggestions": Array of 3 objects, each with:
- "{target_language.lower()}_text": The phrase in {target_language}
- "english_meaning": The English translation/meaning
Make the suggestions contextual, natural, and progressively helpful for the conversation. Focus on practical phrases they might actually need.
Example format:
{{
"intro": "Here are some phrases you might find useful:",
"suggestions": [
{{
"{target_language.lower()}_text": "Example phrase",
"english_meaning": "English translation"
}}
]
}}"""
response = client.chat.completions.create(
model=config.OPENAI_MODEL,
messages=[
{"role": "system", "content": f"You are a helpful {target_language} language learning assistant. Always respond with valid JSON."},
{"role": "user", "content": suggestion_prompt}
],
max_tokens=500,
temperature=0.7
)
suggestion_json = response.choices[0].message.content.strip()
logger.info(f"AI suggestion response: {suggestion_json}")
# Parse JSON response
import json
try:
# Clean up the JSON response to handle potential formatting issues
cleaned_json = suggestion_json.strip()
if cleaned_json.startswith('```json'):
cleaned_json = cleaned_json[7:-3].strip()
elif cleaned_json.startswith('```'):
cleaned_json = cleaned_json[3:-3].strip()
suggestion_data = json.loads(cleaned_json)
return SuggestionResponse(
intro=suggestion_data.get("intro", "Here are some helpful phrases:"),
suggestions=suggestion_data.get("suggestions", [])
)
except json.JSONDecodeError as e:
logger.error(f"JSON decode error: {str(e)} for content: {cleaned_json}")
# Fallback if JSON parsing fails
text_key = f"{target_language.lower()}_text"
fallback_suggestions = [
{
text_key: "Excuse me, can you help me?",
"english_meaning": "A polite way to ask for assistance"
},
{
text_key: "Thank you very much",
"english_meaning": "Express gratitude"
},
{
text_key: "I don't understand",
"english_meaning": "When you need clarification"
}
]
return SuggestionResponse(
intro="Here are some helpful phrases:",
suggestions=fallback_suggestions
)
except Exception as e:
logger.error(f"Suggestion generation error: {str(e)}")
# Return fallback suggestions instead of raising an error
return SuggestionResponse(
intro="Here are some helpful phrases:",
suggestions=[
{
"german_text" if request.language == "german" else "indonesian_text": "Hello",
"english_meaning": "A basic greeting"
},
{
"german_text" if request.language == "german" else "indonesian_text": "Thank you",
"english_meaning": "Express gratitude"
},
{
"german_text" if request.language == "german" else "indonesian_text": "Please",
"english_meaning": "Polite request"
}
]
)
@app.post("/api/translate", response_model=TranslationResult)
async def translate_text(request: TranslationRequest) -> TranslationResult:
try:
client = openai.OpenAI(api_key=config.OPENAI_API_KEY)
translation_prompt = f"""Translate the following Indonesian text to natural, conversational English.
Keep the tone and style appropriate for casual conversation.
Indonesian text: "{request.text}"
Provide only the English translation, nothing else."""
response = client.chat.completions.create(
model=config.OPENAI_MODEL,
messages=[
{"role": "system", "content": "You are a professional Indonesian to English translator. Provide natural, conversational translations."},
{"role": "user", "content": translation_prompt}
],
max_tokens=200,
temperature=0.3
)
translation = response.choices[0].message.content.strip()
return TranslationResult(
translation=translation,
source_text=request.text
)
except Exception as e:
logger.error(f"Translation error: {str(e)}")
raise HTTPException(status_code=500, detail=f"Translation failed: {str(e)}")
@app.post("/api/conversation-feedback", response_model=ConversationFeedbackResponse)
async def generate_conversation_feedback(request: ConversationFeedbackRequest) -> ConversationFeedbackResponse:
"""Generate encouraging feedback and suggestions for completed conversation."""
logger.info(f"Received feedback request: language={request.language}, scenario={request.scenario}")
try:
client = openai.OpenAI(api_key=config.OPENAI_API_KEY)
# Build conversation history
conversation_context = ""
user_messages = []
for msg in request.conversation_history:
if msg.get('type') == 'user':
user_messages.append(msg['text'])
conversation_context += f"{msg.get('type', 'unknown').capitalize()}: {msg.get('text', '')}\n"
# Determine target language and feedback context
if request.language == "german":
target_language = "German"
language_specific_feedback = """
Focus on common German language learning areas:
- Article usage (der, die, das)
- Verb conjugation and word order
- Formal vs informal language (Sie vs du)
- Separable verbs
- Common German expressions and idioms
"""
else:
target_language = "Indonesian"
language_specific_feedback = """
Focus on common Indonesian language learning areas:
- Formal vs informal language (using proper pronouns)
- Sentence structure and word order
- Common Indonesian expressions
- Politeness levels and cultural context
"""
feedback_prompt = f"""You are an encouraging {target_language} language teacher. A student has just finished a conversation practice session in a {request.scenario} scenario.
Here's their conversation:
{conversation_context}
{language_specific_feedback}
Provide helpful, encouraging feedback as a JSON object with:
- "encouragement": A positive, motivating message about their effort (2-3 sentences)
- "suggestions": Array of 2-3 objects with:
- "category": Area of improvement (e.g., "Pronunciation", "Grammar", "Vocabulary")
- "tip": Specific, actionable advice
- "examples": Array of 1-2 objects with:
- "original": Something they actually said (from the conversation)
- "improved": A better way to say it
- "reason": Brief explanation of why it's better
Make it encouraging and supportive, focusing on growth rather than criticism. If they did well, focus on areas to sound more natural or confident.
Example format:
{{
"encouragement": "You did a great job engaging in this conversation! Your effort to communicate is really paying off.",
"suggestions": [
{{
"category": "Vocabulary",
"tip": "Try using more common everyday words to sound more natural"
}}
],
"examples": [
{{
"original": "I want to purchase this item",
"improved": "I'd like to buy this",
"reason": "Sounds more natural and conversational"
}}
]
}}"""
response = client.chat.completions.create(
model=config.OPENAI_MODEL,
messages=[
{"role": "system", "content": f"You are an encouraging {target_language} language teacher. Always respond with valid JSON and be supportive."},
{"role": "user", "content": feedback_prompt}
],
max_tokens=600,
temperature=0.7
)
feedback_json = response.choices[0].message.content.strip()
logger.info(f"AI feedback response: {feedback_json}")
# Parse JSON response
try:
# Strip optional Markdown code fences before parsing
cleaned_json = feedback_json.strip()
if cleaned_json.startswith('```json'):
cleaned_json = cleaned_json[len('```json'):]
elif cleaned_json.startswith('```'):
cleaned_json = cleaned_json[3:]
if cleaned_json.endswith('```'):
cleaned_json = cleaned_json[:-3]
feedback_data = json.loads(cleaned_json.strip())
return ConversationFeedbackResponse(
encouragement=feedback_data.get("encouragement", "Great job practicing! Every conversation helps you improve."),
suggestions=feedback_data.get("suggestions", []),
examples=feedback_data.get("examples", [])
)
except json.JSONDecodeError as e:
logger.error(f"JSON decode error: {str(e)} for content: {cleaned_json}")
# Fallback response
return ConversationFeedbackResponse(
encouragement="Great job practicing! Every conversation helps you improve.",
suggestions=[
{
"category": "Practice",
"tip": "Keep practicing regular conversations to build confidence"
}
],
examples=[]
)
except Exception as e:
logger.error(f"Feedback generation error: {str(e)}")
# Return encouraging fallback
return ConversationFeedbackResponse(
encouragement="Great job practicing! Every conversation helps you improve.",
suggestions=[
{
"category": "Practice",
"tip": "Keep practicing regular conversations to build confidence"
}
],
examples=[]
)
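The fence cleanup inside `/api/conversation-feedback` is worth factoring into a small reusable helper. A minimal sketch mirroring that logic (the helper name `parse_model_json` is our own, not part of this codebase):

```python
import json


def parse_model_json(raw: str) -> dict:
    """Parse a model reply that may be wrapped in Markdown code fences."""
    cleaned = raw.strip()
    # Remove a leading ```json or bare ``` fence, if present
    if cleaned.startswith("```json"):
        cleaned = cleaned[len("```json"):]
    elif cleaned.startswith("```"):
        cleaned = cleaned[3:]
    # Remove a trailing fence, if present
    if cleaned.endswith("```"):
        cleaned = cleaned[:-3]
    return json.loads(cleaned.strip())
```

The endpoint's `except json.JSONDecodeError` fallback would then wrap a single call to this helper.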
@app.get("/api/health")
async def health_check() -> dict:
return {"status": "healthy"}
session_services: Dict[str, Any] = {}
@app.websocket("/ws/speech/{language}")
async def websocket_speech_endpoint(websocket: WebSocket, language: str):
await websocket.accept()
logger.info(f"WebSocket client connected for language: {language}")
# Validate language
if language not in language_services:
await websocket.close(code=1008, reason="Unsupported language")
return
audio_buffer = bytearray()
is_recording = False
chunk_count = 0
latest_transcript = ""
recording_start_time = None
max_recording_duration = 60 # 60 seconds max (increased to give more time after suggestions)
transcript_repeat_count = 0
last_transcript = ""
high_confidence_count = 0
import uuid
session_id = str(uuid.uuid4())
session_conversation_service = language_services[language].__class__() # Create new instance
session_services[session_id] = session_conversation_service
try:
while True:
data = await websocket.receive_text()
message = json.loads(data)
logger.info(f"Received message type: {message['type']}")
if message["type"] == "audio_start":
is_recording = True
audio_buffer.clear()
chunk_count = 0
latest_transcript = ""
recording_start_time = time.time()
logger.info("Started recording session")
elif message["type"] == "conversation_reset":
session_conversation_service.ai_service.reset_conversation()
logger.info("Conversation history reset")
elif message["type"] == "audio_chunk":
if is_recording:
# Check for recording timeout
if recording_start_time and time.time() - recording_start_time > max_recording_duration:
logger.warning("Recording timeout reached, auto-stopping")
# Send timeout notification to frontend
timeout_notification = {
"type": "recording_timeout",
"message": "Recording stopped due to timeout"
}
await websocket.send_text(json.dumps(timeout_notification))
# Force audio_end processing
message = {"type": "audio_end", "scenario_context": message.get("scenario_context", "")}
# Don't return, let it fall through to audio_end processing
else:
audio_data = base64.b64decode(message["audio"])
logger.info(f"Received audio chunk: {len(audio_data)} bytes")
audio_buffer.extend(audio_data)
logger.info(f"Audio buffer size: {len(audio_buffer)} bytes")
# Process chunk for real-time transcription
chunk_count += 1
try:
# Only process every 8th chunk to reduce log spam and API calls
if chunk_count % 8 == 0 and len(audio_buffer) >= 19200: # ~0.4 seconds of audio at 48kHz
recognition_audio = speech.RecognitionAudio(content=bytes(audio_buffer))
response = session_conversation_service.stt_service.client.recognize(
config=session_conversation_service.stt_service.recognition_config,
audio=recognition_audio
)
if response.results:
transcript = response.results[0].alternatives[0].transcript
confidence = response.results[0].alternatives[0].confidence
# Store transcript if confidence is reasonable (lowered for speed)
if confidence > 0.6:
latest_transcript = transcript # Store latest transcript
# Check for repeated high-confidence transcripts
if confidence > 0.9:
if transcript == last_transcript:
high_confidence_count += 1
logger.info(f"Repeated high confidence transcript #{high_confidence_count}: '{transcript}' (confidence: {confidence})")
# If we've seen the same high-confidence transcript 4+ times, auto-stop
if high_confidence_count >= 4:
logger.info("Auto-stopping recording due to repeated high-confidence transcript")
is_recording = False
# Process immediately without waiting for more chunks
await websocket.send_text(json.dumps({
"type": "transcription",
"transcript": transcript,
"is_final": True,
"confidence": confidence
}))
# Process AI response
logger.info("Getting AI response...")
ai_response = await session_conversation_service.process_conversation_flow_fast(
transcript,
message.get("scenario_context", "")
)
logger.info(f"AI response: {ai_response.get('text', 'No text')}")
await websocket.send_text(json.dumps(ai_response))
audio_buffer.clear()
logger.info("Recording session ended due to repeated transcript")
continue # Continue to next message
else:
high_confidence_count = 1
last_transcript = transcript
logger.info(f"High confidence transcript ready: '{transcript}' (confidence: {confidence})")
else:
high_confidence_count = 0
last_transcript = ""
transcription_result = {
"type": "transcription",
"transcript": transcript,
"is_final": False,
"confidence": confidence
}
await websocket.send_text(json.dumps(transcription_result))
# Only log interim transcriptions occasionally to reduce spam
if chunk_count % 16 == 0:
logger.info(f"Interim transcription: '{transcript}' (confidence: {confidence})")
else:
transcription_result = {
"type": "transcription",
"transcript": "Listening...",
"is_final": False,
"confidence": 0.0
}
await websocket.send_text(json.dumps(transcription_result))
except Exception as e:
# Only log transcription errors occasionally to reduce spam
if chunk_count % 16 == 0:
logger.error(f"Real-time transcription error: {str(e)}")
transcription_result = {
"type": "transcription",
"transcript": "Listening...",
"is_final": False,
"confidence": 0.0
}
await websocket.send_text(json.dumps(transcription_result))
else:
# Reduce logging for non-recording chunks
if chunk_count % 32 == 0:
logger.info("Received audio chunk but not in recording mode")
if message["type"] == "audio_end":  # plain 'if' (not 'elif') so a timeout-mutated message from the audio_chunk branch falls through
is_recording = False
final_transcript = ""
# Use latest interim transcript if available for faster response
logger.info(f"Checking latest_transcript: '{latest_transcript}'")
if latest_transcript.strip():
final_transcript = latest_transcript
logger.info(f"Using latest interim transcript: '{final_transcript}'")
# Send final transcription immediately
transcription_result = {
"type": "transcription",
"transcript": final_transcript,
"is_final": True,
"confidence": 0.8 # Reasonable confidence for interim result
}
await websocket.send_text(json.dumps(transcription_result))
# Process AI response with faster flow
logger.info("Getting AI response...")
ai_response = await session_conversation_service.process_conversation_flow_fast(
final_transcript,
message.get("scenario_context", "")
)
logger.info(f"AI response: {ai_response.get('text', 'No text')}")
await websocket.send_text(json.dumps(ai_response))
# Clear buffer
audio_buffer.clear()
logger.info("Recording session ended, ready for next session")
elif len(audio_buffer) > 0:
# Fallback to full transcription if no interim results
logger.info(f"Processing final audio buffer: {len(audio_buffer)} bytes")
try:
recognition_audio = speech.RecognitionAudio(content=bytes(audio_buffer))
response = session_conversation_service.stt_service.client.recognize(
config=session_conversation_service.stt_service.recognition_config,
audio=recognition_audio
)
if response.results:
transcript = response.results[0].alternatives[0].transcript
confidence = response.results[0].alternatives[0].confidence
logger.info(f"Final transcription: '{transcript}' (confidence: {confidence})")
transcription_result = {
"type": "transcription",
"transcript": transcript,
"is_final": True,
"confidence": confidence
}
await websocket.send_text(json.dumps(transcription_result))
logger.info("Getting AI response...")
ai_response = await session_conversation_service.process_conversation_flow(
transcript,
message.get("scenario_context", "")
)
logger.info(f"AI response: {ai_response.get('text', 'No text')}")
await websocket.send_text(json.dumps(ai_response))
else:
logger.info("No transcription results from Google Speech")
# Send empty final transcription so UI knows recording ended
transcription_result = {
"type": "transcription",
"transcript": "",
"is_final": True,
"confidence": 0.0
}
await websocket.send_text(json.dumps(transcription_result))
audio_buffer.clear()
logger.info("Recording session ended, ready for next session")
except Exception as e:
logger.error(f"Final speech recognition error: {str(e)}")
# Send empty final transcription so UI knows recording ended
transcription_result = {
"type": "transcription",
"transcript": "",
"is_final": True,
"confidence": 0.0
}
await websocket.send_text(json.dumps(transcription_result))
error_result = {
"type": "error",
"message": f"Speech recognition error: {str(e)}"
}
await websocket.send_text(json.dumps(error_result))
audio_buffer.clear()
else:
logger.info("No audio data to process")
# Send empty final transcription so UI knows recording ended
transcription_result = {
"type": "transcription",
"transcript": "",
"is_final": True,
"confidence": 0.0
}
await websocket.send_text(json.dumps(transcription_result))
elif message["type"] == "text_message":
logger.info(f"Processing text message: '{message['text']}'")
ai_response = await session_conversation_service.process_conversation_flow(
message["text"],
message.get("scenario_context", "")
)
logger.info(f"AI response: {ai_response.get('text', 'No text')}")
await websocket.send_text(json.dumps(ai_response))
elif message["type"] == "initial_greeting":
logger.info("Processing initial greeting request")
ai_response = await session_conversation_service.generate_initial_greeting(
message.get("scenario_context", "")
)
logger.info(f"Initial greeting: {ai_response.get('text', 'No text')}")
await websocket.send_text(json.dumps(ai_response))
except WebSocketDisconnect:
logger.info("WebSocket client disconnected")
except Exception as e:
logger.error(f"WebSocket error: {str(e)}")
error_message = {
"type": "error",
"message": f"WebSocket error: {str(e)}"
}
await websocket.send_text(json.dumps(error_message))
finally:
session_services.pop(session_id, None)  # free per-session service state on disconnect
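The auto-stop heuristic in the speech endpoint (stop once the same high-confidence transcript repeats four times in a row) can be isolated for testing. A self-contained sketch of that logic — the class name and defaults mirror the handler's inline constants, but this is an illustration, not code from the repo:

```python
class RepeatDetector:
    """Auto-stop heuristic: once the same high-confidence transcript has been
    seen `threshold` times in a row, recording can be stopped early."""

    def __init__(self, threshold: int = 4, min_confidence: float = 0.9):
        self.threshold = threshold
        self.min_confidence = min_confidence
        self.last = ""
        self.count = 0

    def update(self, transcript: str, confidence: float) -> bool:
        """Feed one interim result; return True when recording should stop."""
        if confidence <= self.min_confidence:
            # A low-confidence result resets the streak, as in the handler
            self.last = ""
            self.count = 0
            return False
        if transcript == self.last:
            self.count += 1
        else:
            self.last = transcript
            self.count = 1
        return self.count >= self.threshold
```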
@app.websocket("/ws/tts")
async def websocket_tts_endpoint(websocket: WebSocket):
"""WebSocket endpoint for text-to-speech streaming."""
await websocket.accept()
try:
while True:
data = await websocket.receive_text()
message = json.loads(data)
if message["type"] == "synthesize":
try:
# Use the default TTS service for this endpoint
tts_service = language_services["indonesian"].tts_service
audio_content = await tts_service.synthesize_speech(message["text"])
audio_base64 = base64.b64encode(audio_content).decode('utf-8')
response = {
"type": "audio",
"audio": audio_base64,
"format": "mp3"
}
await websocket.send_text(json.dumps(response))
except Exception as e:
error_response = {
"type": "error",
"message": f"TTS error: {str(e)}"
}
await websocket.send_text(json.dumps(error_response))
except WebSocketDisconnect:
logger.info("TTS client disconnected")
except Exception as e:
error_message = {
"type": "error",
"message": f"TTS WebSocket error: {str(e)}"
}
await websocket.send_text(json.dumps(error_message))
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host=config.HOST, port=config.PORT, log_level="debug" if config.DEBUG else "info")

398
backend/models.py Normal file

@@ -0,0 +1,398 @@
from pydantic import BaseModel
from typing import List, Optional, Dict
from enum import Enum
class HelpfulPhrase(BaseModel):
indonesian: str
english: str
class CharacterType(str, Enum):
WARUNG_OWNER = "warung_owner"
OJEK_DRIVER = "ojek_driver"
CASHIER = "cashier"
HOTEL_RECEPTIONIST = "hotel_receptionist"
MARKET_VENDOR = "market_vendor"
GENERIC = "generic"
class PersonalityTone(str, Enum):
FRIENDLY = "friendly"
CASUAL = "casual"
FORMAL = "formal"
CHEERFUL = "cheerful"
BUSINESS_LIKE = "business_like"
SLEEPY = "sleepy"
CHATTY = "chatty"
class Gender(str, Enum):
MALE = "male"
FEMALE = "female"
class GoalItem(BaseModel):
id: str
description: str
keywords: List[str] = []
completed: bool = False
class Personality(BaseModel):
character_type: CharacterType
name: str
gender: Gender
tone: PersonalityTone
age_range: str
background: str
typical_phrases: List[str]
response_style: str
location_context: str
scenario_title: str
scenario_description: str
scenario_challenge: str
scenario_goal: str
goal_items: List[GoalItem]
helpful_phrases: List[HelpfulPhrase]
is_impatient: bool = False
is_helpful: bool = True
is_talkative: bool = True
uses_slang: bool = False
def get_system_prompt(self, scenario_context: str = "") -> str:
"""Generate a system prompt based on this personality."""
casualness_note = """
SPEAKING STYLE - BE VERY CASUAL AND NATURAL:
- Use everyday Indonesian like real people do
- Drop formal words when people actually don't use them
- Use contractions and casual speech patterns
- Speak like you're talking to a friend or regular customer
- Don't be overly polite or formal - be natural and relaxed
- Use "gak" instead of "tidak", "udah" instead of "sudah", etc.
- Sound like real Indonesian street conversation
"""
interaction_guide = ""
if self.character_type == CharacterType.WARUNG_OWNER:
interaction_guide = """
INTERACTION FLOW:
- Greet → Ask what they want → Ask details (spice, egg, etc.) → Ask for drink → Give total → Finish
- Remember what they've already ordered - don't repeat questions
"""
elif self.character_type == CharacterType.OJEK_DRIVER:
interaction_guide = """
INTERACTION FLOW:
- Ask destination → Negotiate price → Mention traffic/conditions → Agree on price → Give ride instructions
- Focus on practical transport concerns
"""
elif self.character_type == CharacterType.CASHIER:
interaction_guide = """
INTERACTION FLOW:
- Greet → Ask what they're buying → Scan items → Give total → Ask about bags → Complete transaction
- Keep it efficient and friendly
"""
else:
interaction_guide = """
INTERACTION FLOW:
- Respond naturally to customer needs
- Help them with whatever service you provide
- Keep conversation relevant to your role
"""
base_prompt = f"""You are {self.name}, a real {self.character_type.value.replace('_', ' ')} in Indonesia. You talk like a normal Indonesian person - casual, natural, and relaxed.
SCENARIO CONTEXT:
📍 {self.scenario_title}
🎯 What's happening: {self.scenario_description}
Challenge: {self.scenario_challenge}
🏆 Goal: {self.scenario_goal}
{casualness_note}
CHARACTER:
- {self.name} ({self.age_range} {self.character_type.value.replace('_', ' ')})
- {self.background}
- Works at: {self.location_context}
- Personality: {self.tone.value}, {'talkative' if self.is_talkative else 'quiet'}, {'helpful' if self.is_helpful else 'business-focused'}
YOUR TYPICAL PHRASES (use these naturally):
{chr(10).join(f'- {phrase}' for phrase in self.typical_phrases)}
CRITICAL RULES - READ CONVERSATION HISTORY CAREFULLY:
1. You are {self.name} - NOT a teacher, NOT formal, just a real person in this scenario
2. Speak casual Indonesian like in real life - very relaxed and natural
3. Keep responses SHORT (5-10 words max, like real conversation)
4. READ THE CONVERSATION HISTORY ABOVE - remember what was already asked and answered
5. NEVER repeat questions you already asked - check what was said before
6. TRACK the interaction progress - move naturally through the process based on what's been discussed
7. Use informal language: "gak" not "tidak", "udah" not "sudah", "gimana" not "bagaimana"
8. Stay relevant to your role and what customers need from you in this scenario
9. If customer already answered a question, move to the NEXT step in the process
10. Help the customer achieve their goal: {self.scenario_goal}
{interaction_guide}
ADDITIONAL CONTEXT: {scenario_context}
IMPORTANT: Look at the conversation history above before responding! Don't ask questions that were already answered. Continue naturally from where the conversation left off! Help them complete their goal in this scenario."""
return base_prompt
WARUNG_PERSONALITIES = {
"pak_budi": Personality(
character_type=CharacterType.WARUNG_OWNER,
name="Pak Budi",
gender=Gender.MALE,
tone=PersonalityTone.CASUAL,
age_range="middle-aged",
background="Chill warung owner who knows his regular customers",
typical_phrases=[
"Mau apa?",
"Pedes gak?",
"Telur ditambahin?",
"Minum apa?",
"Tunggu ya",
"Udah jadi nih",
"Berapa ribu ya...",
"Makasih Bos"
],
response_style="Quick and casual, gets straight to the point",
location_context="Small warung near campus",
scenario_title="At a Warung",
scenario_description="You're at a local Indonesian warung (small restaurant) trying to order food and drinks. Practice ordering in Indonesian and navigating the casual dining experience.",
scenario_challenge="Understanding local food terminology, spice levels, and casual Indonesian conversation patterns. The owner speaks quickly and uses informal language.",
scenario_goal="Order nasi goreng pedas and teh manis",
goal_items=[
GoalItem(
id="order_nasi_goreng",
description="Order nasi goreng pedas"
),
GoalItem(
id="order_drink",
description="Order teh manis"
)
],
helpful_phrases=[
HelpfulPhrase(indonesian="Saya mau...", english="I want..."),
HelpfulPhrase(indonesian="Berapa harganya?", english="How much?"),
HelpfulPhrase(indonesian="Terima kasih", english="Thank you"),
HelpfulPhrase(indonesian="Pedas", english="Spicy"),
HelpfulPhrase(indonesian="Teh manis", english="Sweet tea"),
HelpfulPhrase(indonesian="Nasi goreng", english="Fried rice")
],
is_helpful=True,
is_talkative=False,
uses_slang=True
),
"ibu_sari": Personality(
character_type=CharacterType.WARUNG_OWNER,
name="Ibu Sari",
gender=Gender.FEMALE,
tone=PersonalityTone.CHEERFUL,
age_range="middle-aged",
background="Friendly warung owner who likes to chat with customers",
typical_phrases=[
"Eh, mau apa Dek?",
"Udah laper ya?",
"Pedes level berapa?",
"Es teh manis?",
"Sebentar ya Dek",
"Nih, masih panas",
"Hati-hati ya"
],
response_style="Friendly but not overly formal, treats customers warmly",
location_context="Busy warung in residential area",
scenario_title="At a Warung",
scenario_description="You're at a local Indonesian warung (small restaurant) trying to order food and drinks. Practice ordering in Indonesian and navigating the casual dining experience.",
scenario_challenge="Understanding local food terminology, spice levels, and casual Indonesian conversation patterns. The owner is chatty and may engage in small talk.",
scenario_goal="Order nasi goreng pedas and teh manis",
goal_items=[
GoalItem(
id="order_nasi_goreng",
description="Order nasi goreng pedas"
),
GoalItem(
id="order_drink",
description="Order teh manis"
)
],
helpful_phrases=[
HelpfulPhrase(indonesian="Saya mau...", english="I want..."),
HelpfulPhrase(indonesian="Berapa harganya?", english="How much?"),
HelpfulPhrase(indonesian="Terima kasih", english="Thank you"),
HelpfulPhrase(indonesian="Pedas", english="Spicy"),
HelpfulPhrase(indonesian="Teh manis", english="Sweet tea"),
HelpfulPhrase(indonesian="Nasi goreng", english="Fried rice")
],
is_helpful=True,
is_talkative=True,
uses_slang=True
)
}
OJEK_PERSONALITIES = {
"mbak_sari": Personality(
character_type=CharacterType.OJEK_DRIVER,
name="Mbak Sari",
gender=Gender.FEMALE,
tone=PersonalityTone.CASUAL,
age_range="young",
background="Smart ojek driver who knows how to negotiate",
typical_phrases=[
"Kemana Mas?",
"Wah macet nih",
"Bensin naik lagi",
"Udah deket kok",
"Pegang yang kuat",
"Sampai deh",
"Ati-ati ya",
"Jangan bilang-bilang"
],
response_style="Direct and business-minded, mentions practical concerns",
location_context="Busy street corner",
scenario_title="Taking an Ojek",
scenario_description="You need to get a motorcycle taxi (ojek) to take you to the mall. Practice negotiating destination and price in Indonesian.",
scenario_challenge="Learning transportation vocabulary, price negotiation, and understanding Jakarta traffic concerns. The driver may try to charge tourist prices.",
scenario_goal="Negotiate ride to mall and agree on price",
goal_items=[
GoalItem(
id="state_destination",
description="Tell destination (mall)"
),
GoalItem(
id="agree_price",
description="Agree on price"
)
],
helpful_phrases=[
HelpfulPhrase(indonesian="Ke mall berapa?", english="How much to the mall?"),
HelpfulPhrase(indonesian="Mahal banget!", english="That's too expensive!"),
HelpfulPhrase(indonesian="Lima belas ribu boleh?", english="Is 15 thousand OK?"),
HelpfulPhrase(indonesian="Ayo!", english="Let's go!"),
HelpfulPhrase(indonesian="Ke mall", english="To the mall"),
HelpfulPhrase(indonesian="Berapa ongkosnya?", english="How much is the fare?")
],
is_helpful=True,
is_talkative=True,
uses_slang=True
)
}
CASHIER_PERSONALITIES = {
"adik_kasir": Personality(
character_type=CharacterType.CASHIER,
name="Adik Kasir",
gender=Gender.FEMALE,
tone=PersonalityTone.CASUAL,
age_range="young",
background="Young cashier who's chill and helpful",
typical_phrases=[
"Malam Kak",
"Beli apa?",
"Yang lain?",
"Pake kantong?",
"Total sekian",
"Kembaliannya",
"Makasih ya",
"Ati-ati"
],
response_style="Quick and efficient, gets the job done",
location_context="Alfamart convenience store",
scenario_title="At Alfamart",
scenario_description="You're shopping at Alfamart, a popular Indonesian convenience store chain. Practice buying everyday items and completing a transaction in Indonesian.",
scenario_challenge="Understanding convenience store vocabulary, payment interactions, and polite customer service language. Learn about Indonesian instant noodle brands and local products.",
scenario_goal="Buy Indomie and mineral water",
goal_items=[
GoalItem(
id="buy_indomie",
description="Buy Indomie"
),
GoalItem(
id="buy_water",
description="Buy mineral water"
)
],
helpful_phrases=[
HelpfulPhrase(indonesian="Saya mau beli...", english="I want to buy..."),
HelpfulPhrase(indonesian="Berapa totalnya?", english="How much is the total?"),
HelpfulPhrase(indonesian="Pake kantong", english="With a bag"),
HelpfulPhrase(indonesian="Bayar cash", english="Pay with cash"),
HelpfulPhrase(indonesian="Indomie", english="Indomie (instant noodles)"),
HelpfulPhrase(indonesian="Air mineral", english="Mineral water")
],
is_helpful=True,
is_talkative=False,
uses_slang=True
)
}
COFFEE_SHOP_PERSONALITIES = {
"tetangga_ali": Personality(
character_type=CharacterType.GENERIC,
name="Tetangga Ali",
gender=Gender.MALE,
tone=PersonalityTone.CHATTY,
age_range="middle-aged",
background="Friendly neighborhood guy who loves chatting with everyone about everything",
typical_phrases=[
"Eh, apa kabar?",
"Lagi ngapain nih?",
"Cuacanya panas banget ya hari ini",
"Udah makan belum?",
"Gimana kabar keluarga?",
"Kerja dimana sekarang?",
"Udah lama gak ketemu",
"Wah, sibuk banget ya",
"Ngomong-ngomong...",
"Oh iya, tau gak...",
"Kemarin aku ke...",
"Eh, kamu pernah ke...?"
],
response_style="Very talkative, asks lots of questions, shares stories, makes connections to random topics",
location_context="Local coffee shop in residential area",
scenario_title="Coffee Shop Small Talk",
scenario_description="You're at a local coffee shop and meet a very friendly neighbor who loves to chat. Practice making small talk in Indonesian - discussing weather, family, work, hobbies, and daily life.",
scenario_challenge="Learn natural small talk patterns, question-asking, and how to keep conversations flowing in Indonesian. Practice responding to personal questions and sharing about yourself.",
scenario_goal="Have a natural small talk conversation covering at least 3 different topics",
goal_items=[
GoalItem(
id="greet_and_respond",
description="Exchange greetings and ask how each other is doing"
),
GoalItem(
id="discuss_weather_daily_life",
description="Talk about weather, daily activities, or current situation"
),
GoalItem(
id="share_personal_info",
description="Share something about yourself (work, family, hobbies, etc.)"
),
GoalItem(
id="ask_followup_questions",
description="Ask follow-up questions to keep the conversation going"
)
],
helpful_phrases=[
HelpfulPhrase(indonesian="Apa kabar?", english="How are you?"),
HelpfulPhrase(indonesian="Baik-baik aja", english="I'm doing fine"),
HelpfulPhrase(indonesian="Lagi ngapain?", english="What are you up to?"),
HelpfulPhrase(indonesian="Cuacanya panas ya", english="The weather is hot, isn't it?"),
HelpfulPhrase(indonesian="Udah makan belum?", english="Have you eaten yet?"),
HelpfulPhrase(indonesian="Gimana kabar keluarga?", english="How's the family?"),
HelpfulPhrase(indonesian="Kerja dimana?", english="Where do you work?"),
HelpfulPhrase(indonesian="Ngomong-ngomong...", english="By the way..."),
HelpfulPhrase(indonesian="Oh iya...", english="Oh yes..."),
HelpfulPhrase(indonesian="Wah, menarik!", english="Wow, interesting!"),
HelpfulPhrase(indonesian="Bener juga ya", english="That's true"),
HelpfulPhrase(indonesian="Udah lama gak ketemu", english="Haven't seen you in a while")
],
is_helpful=True,
is_talkative=True,
is_impatient=False,
uses_slang=True
)
}
SCENARIO_PERSONALITIES = {
"warung": WARUNG_PERSONALITIES,
"ojek": OJEK_PERSONALITIES,
"alfamart": CASHIER_PERSONALITIES,
"coffee_shop": COFFEE_SHOP_PERSONALITIES
}
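Each personality carries `goal_items` whose optional `keywords` list can drive completion tracking. A minimal self-contained sketch of such a matcher — plain dataclasses stand in for pydantic, and the matching rule is our assumption; the repo's actual goal tracking may differ:

```python
from dataclasses import dataclass, field


@dataclass
class GoalItem:
    id: str
    description: str
    keywords: list = field(default_factory=list)
    completed: bool = False


def update_goals(goals: list, transcript: str) -> list:
    """Mark goals completed when any keyword appears in the transcript;
    return the ids of goals completed by this utterance."""
    newly_done = []
    lowered = transcript.lower()
    for goal in goals:
        if not goal.completed and any(kw.lower() in lowered for kw in goal.keywords):
            goal.completed = True
            newly_done.append(goal.id)
    return newly_done
```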

65
backend/pyproject.toml Normal file

@@ -0,0 +1,65 @@
[project]
name = "learn-indonesian-backend"
version = "0.1.0"
description = "FastAPI backend for Indonesian learning app"
authors = [
{name = "Your Name", email = "your.email@example.com"},
]
dependencies = [
"fastapi>=0.104.1",
"uvicorn>=0.24.0",
"pydantic>=2.5.0",
"python-multipart>=0.0.6",
"google-cloud-speech>=2.21.0",
"google-cloud-texttospeech>=2.14.2",
"openai>=1.0.0",
"websockets>=11.0.3",
"python-dotenv>=1.0.0",
]
requires-python = ">=3.11"
license = {text = "MIT"}
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["."]
[tool.ruff]
target-version = "py311"
line-length = 88
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # pyflakes
"I", # isort
"B", # flake8-bugbear
"C4", # flake8-comprehensions
"UP", # pyupgrade
]
ignore = [
"E501", # line too long, handled by ruff format
"B008", # do not perform function calls in argument defaults
"C901", # too complex
]
[tool.ruff.per-file-ignores]
"__init__.py" = ["F401"]
[tool.ruff.isort]
known-first-party = ["app"]
[tool.ruff.format]
quote-style = "double"
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"
[tool.uv]
dev-dependencies = [
"ruff>=0.1.6",
"pytest>=7.4.3",
"pytest-asyncio>=0.21.1",
"httpx>=0.25.2",
]
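Note that recent ruff releases (0.2 and later) moved the lint options above under a `lint` table; if the pinned ruff version is ever upgraded, the equivalent layout would look roughly like this (a sketch, not verified against this project):

```toml
[tool.ruff]
target-version = "py311"
line-length = 88

[tool.ruff.lint]
select = ["E", "W", "F", "I", "B", "C4", "UP"]
ignore = ["E501", "B008", "C901"]

[tool.ruff.lint.per-file-ignores]
"__init__.py" = ["F401"]

[tool.ruff.lint.isort]
known-first-party = ["app"]
```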

507
backend/speech_service.py Normal file

@@ -0,0 +1,507 @@
import asyncio
import json
import os
import logging
from typing import AsyncGenerator, Dict, Any, Optional, List
import base64
from google.cloud import speech
from google.cloud import texttospeech
from google.api_core import exceptions
import openai
from config import config
from models import Personality, SCENARIO_PERSONALITIES, GoalItem, Gender
logger = logging.getLogger(__name__)
class SpeechToTextService:
def __init__(self):
self.client = speech.SpeechClient()
# Get encoding from config
encoding_map = {
"WEBM_OPUS": speech.RecognitionConfig.AudioEncoding.WEBM_OPUS,
"LINEAR16": speech.RecognitionConfig.AudioEncoding.LINEAR16,
"FLAC": speech.RecognitionConfig.AudioEncoding.FLAC,
"MULAW": speech.RecognitionConfig.AudioEncoding.MULAW,
"AMR": speech.RecognitionConfig.AudioEncoding.AMR,
"AMR_WB": speech.RecognitionConfig.AudioEncoding.AMR_WB,
"OGG_OPUS": speech.RecognitionConfig.AudioEncoding.OGG_OPUS,
"MP3": speech.RecognitionConfig.AudioEncoding.MP3,
}
self.recognition_config = speech.RecognitionConfig(
encoding=encoding_map.get(config.SPEECH_ENCODING, speech.RecognitionConfig.AudioEncoding.WEBM_OPUS),
sample_rate_hertz=config.SPEECH_SAMPLE_RATE,
language_code=config.SPEECH_LANGUAGE_CODE,
enable_automatic_punctuation=True,
use_enhanced=True,
model="latest_long",
)
self.streaming_config = speech.StreamingRecognitionConfig(
config=self.recognition_config,
interim_results=True,
single_utterance=False,
)
async def transcribe_streaming(self, audio_generator: AsyncGenerator[bytes, None]) -> AsyncGenerator[Dict[str, Any], None]:
"""Stream audio data to Google Cloud Speech-to-Text and yield transcription results."""
try:
async def request_generator():
# First request with config
yield speech.StreamingRecognizeRequest(streaming_config=self.streaming_config)
# Then audio requests
async for chunk in audio_generator:
yield speech.StreamingRecognizeRequest(audio_content=chunk)
responses = self.client.streaming_recognize(request_generator())
for response in responses:
for result in response.results:
transcript = result.alternatives[0].transcript
is_final = result.is_final
yield {
"type": "transcription",
"transcript": transcript,
"is_final": is_final,
"confidence": result.alternatives[0].confidence if is_final else 0.0
}
except exceptions.GoogleAPICallError as e:
yield {
"type": "error",
"message": f"Speech recognition error: {str(e)}"
}
class TextToSpeechService:
def __init__(self):
self.client = texttospeech.TextToSpeechClient()
# Gender mapping for Google TTS
self.gender_map = {
"FEMALE": texttospeech.SsmlVoiceGender.FEMALE,
"MALE": texttospeech.SsmlVoiceGender.MALE,
"NEUTRAL": texttospeech.SsmlVoiceGender.NEUTRAL,
"male": texttospeech.SsmlVoiceGender.MALE,
"female": texttospeech.SsmlVoiceGender.FEMALE,
}
def _get_voice_and_audio_config(self, gender: str, character_name: str = None) -> tuple:
"""Get appropriate voice and audio configuration based on gender."""
tts_gender = self.gender_map.get(gender, texttospeech.SsmlVoiceGender.FEMALE)
character_voice_map = {
"Pak Budi": {
"name": "id-ID-Chirp3-HD-Charon",
"speaking_rate": 0.95,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
},
"Ibu Sari": {
"name": "id-ID-Chirp3-HD-Kore",
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
},
"Mbak Sari": {
"name": "id-ID-Chirp3-HD-Zephyr",
"speaking_rate": 1.1,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
},
"Adik Kasir": {
"name": "id-ID-Chirp3-HD-Aoede",
"speaking_rate": 1.05,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
},
"Tetangga Ali": {
"name": "id-ID-Chirp3-HD-Puck",
"speaking_rate": 1.05,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
}
}
gender_voice_fallback = {
texttospeech.SsmlVoiceGender.MALE: {
"name": "id-ID-Chirp3-HD-Fenrir",
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.MALE,
},
texttospeech.SsmlVoiceGender.FEMALE: {
"name": "id-ID-Chirp3-HD-Leda",
"speaking_rate": 1.0,
"pitch": None,
"ssml_gender": texttospeech.SsmlVoiceGender.FEMALE,
}
}
config_set = None
if character_name and character_name in character_voice_map:
config_set = character_voice_map[character_name]
logger.info(f"Using character-specific voice for '{character_name}': {config_set['name']}")
if not config_set:
config_set = gender_voice_fallback.get(tts_gender, gender_voice_fallback[texttospeech.SsmlVoiceGender.FEMALE])
logger.info(f"Using gender fallback voice for {tts_gender}: {config_set['name']}")
voice = texttospeech.VoiceSelectionParams(
language_code=config.TTS_LANGUAGE_CODE,
name=config_set["name"],
ssml_gender=config_set["ssml_gender"],
)
audio_config_params = {
"audio_encoding": texttospeech.AudioEncoding.LINEAR16,
"speaking_rate": config_set["speaking_rate"],
"effects_profile_id": ['handset-class-device'],
}
if config_set["pitch"] is not None:
audio_config_params["pitch"] = config_set["pitch"]
audio_config = texttospeech.AudioConfig(**audio_config_params)
return voice, audio_config
async def synthesize_speech(self, text: str, gender: str = "female", character_name: str = None) -> bytes:
"""Convert text to speech using Google Cloud Text-to-Speech with natural, conversational voice."""
try:
logger.info(f"TTS synthesize_speech called with text: '{text}', gender: '{gender}', character: '{character_name}'")
voice, audio_config = self._get_voice_and_audio_config(gender, character_name)
logger.info(f"Using voice: {voice.name}, requested gender: '{gender}', mapped TTS gender: {voice.ssml_gender}")
synthesis_input = texttospeech.SynthesisInput(text=text)
response = self.client.synthesize_speech(
input=synthesis_input,
voice=voice,
audio_config=audio_config,
)
logger.info(f"TTS successful, audio length: {len(response.audio_content)} bytes")
return response.audio_content
except exceptions.GoogleAPICallError as e:
logger.error(f"Text-to-speech error: {str(e)}")
raise RuntimeError(f"Text-to-speech error: {str(e)}") from e
class AIConversationService:
def __init__(self):
self.client = openai.OpenAI(api_key=config.OPENAI_API_KEY)
self.model = config.OPENAI_MODEL
self.current_personality: Optional[Personality] = None
self.conversation_history: List[Dict[str, str]] = []
self.goal_progress: List[GoalItem] = []
def set_personality(self, personality: Personality):
"""Set the current personality for the conversation."""
self.current_personality = personality
# Reset conversation history when personality changes
self.conversation_history = []
# Initialize goal progress
self.goal_progress = [GoalItem(**item.dict()) for item in personality.goal_items]
def reset_conversation(self):
"""Reset the conversation history."""
self.conversation_history = []
# Reset goal progress
if self.current_personality:
self.goal_progress = [GoalItem(**item.dict()) for item in self.current_personality.goal_items]
def get_personality_for_scenario(self, scenario: str, character_name: str = None) -> Personality:
"""Get personality based on scenario and optional character name."""
if scenario in SCENARIO_PERSONALITIES:
personalities = SCENARIO_PERSONALITIES[scenario]
if character_name and character_name in personalities:
return personalities[character_name]
else:
# Return first personality if no specific character requested
return list(personalities.values())[0]
# Return default personality if scenario not found
return Personality(
character_type="generic",
name="Pak/Bu",
tone="friendly",
age_range="middle-aged",
background="Helpful Indonesian person",
typical_phrases=["Halo!", "Apa kabar?", "Bisa saya bantu?"],
response_style="Friendly and helpful",
location_context="Indonesia",
is_helpful=True,
is_talkative=True
)
async def check_goal_completion(self, user_message: str, ai_response: str) -> bool:
"""Check if any goals are completed using LLM judge."""
if not self.goal_progress:
return False
goals_completed = False
# Only check goals that aren't already completed
incomplete_goals = [g for g in self.goal_progress if not g.completed]
if not incomplete_goals:
return False
logger.info(f"Checking goal completion for user message: '{user_message}'")
logger.info(f"Incomplete goals: {[g.description for g in incomplete_goals]}")
conversation_context = ""
for exchange in self.conversation_history[-3:]:
conversation_context += f"User: {exchange['user']}\nAI: {exchange['assistant']}\n"
for goal in incomplete_goals:
logger.info(f"Checking goal: '{goal.description}'")
completion_check = await self._judge_goal_completion(
goal,
user_message,
ai_response,
conversation_context
)
if completion_check:
goal.completed = True
goals_completed = True
logger.info(f"✅ Goal completed: {goal.description}")
else:
logger.info(f"❌ Goal not completed: {goal.description}")
return goals_completed
async def _judge_goal_completion(self, goal: GoalItem, user_message: str, ai_response: str, conversation_context: str) -> bool:
"""Use LLM to judge if a specific goal was completed."""
try:
if "order" in goal.description.lower() or "buy" in goal.description.lower():
judge_prompt = f"""You are a strict judge determining if a specific goal was FULLY completed in a conversation.
GOAL TO CHECK: {goal.description}
RECENT CONVERSATION CONTEXT:
{conversation_context}
LATEST EXCHANGE:
User: {user_message}
AI: {ai_response}
CRITICAL RULES FOR ORDERING GOALS:
1. ONLY return "YES" if the user has COMPLETELY finished this exact goal
2. Return "NO" if the goal is partial, incomplete, or just being discussed
3. For "Order [item]" goals: user must explicitly say they want/order that EXACT item with ALL specifications
4. For drink goals: user must specifically mention wanting/ordering a drink
5. Don't mark as complete just because the AI is asking about it
Answer ONLY "YES" or "NO":"""
else:
judge_prompt = f"""You are judging if a conversational goal was completed in a natural small talk scenario.
GOAL TO CHECK: {goal.description}
RECENT CONVERSATION CONTEXT:
{conversation_context}
LATEST EXCHANGE:
User: {user_message}
AI: {ai_response}
RULES FOR SMALL TALK GOALS:
1. Return "YES" if the user has naturally accomplished this conversational goal ANYWHERE in the conversation
2. For "Share something about yourself" goals: Look through the ENTIRE conversation for work, family, hobbies, personal interests, financial situation, dreams, etc.
3. For "Ask follow-up questions" goals: user asks questions to continue conversation
4. For "Exchange greetings" goals: user greets or responds to greetings
5. For "Discuss weather/daily life" goals: user talks about weather, daily activities, current events
6. Goals can be completed through natural conversation flow, not just direct statements
7. IMPORTANT: Check the FULL conversation context, not just the latest exchange
EXAMPLES:
- Goal: "Share something about yourself (work, family, hobbies, etc.)"
- User mentions work: "sibuk banget di kantor sering lembur" YES (work situation)
- User mentions finances: "nggak punya duit" YES (personal finance)
- User mentions hobbies: "sukanya ke Afrika" YES (travel interests)
- User mentions dreams: "Belum pernah mimpi aja dulu sih" YES (personal aspirations)
- User just greets: "Baik nih" NO (just greeting, no personal info)
- Goal: "Ask follow-up questions to keep the conversation going"
- User: "Mas Ali suka lari juga gak?" YES (asking follow-up question)
- User: "Gimana kabar keluarga?" YES (asking about family)
- User: "Iya" NO (just responding, not asking)
Be reasonable and natural - small talk goals should be completed through normal conversation.
SCAN THE ENTIRE CONVERSATION, not just the latest message.
Answer ONLY "YES" or "NO":"""
response = self.client.chat.completions.create(
model=self.model,
messages=[{"role": "user", "content": judge_prompt}],
max_tokens=5,
temperature=0.1, # Low temperature for consistent judging
)
result = response.choices[0].message.content.strip().upper()
logger.info(f"Goal judge result for '{goal.description}': {result}")
return result == "YES"
except Exception as e:
logger.error(f"Error in goal completion judge: {str(e)}")
return False
def are_all_goals_completed(self) -> bool:
"""Check if all goals are completed."""
return all(goal.completed for goal in self.goal_progress)
def get_goal_status(self) -> Dict[str, Any]:
"""Get current goal status."""
return {
"scenario_goal": self.current_personality.scenario_goal if self.current_personality else "",
"goal_items": [
{
"id": goal.id,
"description": goal.description,
"completed": goal.completed
} for goal in self.goal_progress
],
"all_completed": self.are_all_goals_completed()
}
async def get_response(self, user_message: str, context: str = "") -> str:
"""Get AI response to user message using current personality and conversation history."""
try:
# Use current personality or default
if not self.current_personality:
default_personality = self.get_personality_for_scenario("warung", "pak_budi")
self.set_personality(default_personality)
system_prompt = self.current_personality.get_system_prompt(context)
# Build messages with conversation history
messages = [{"role": "system", "content": system_prompt}]
# Add conversation history (keep last 15 exchanges for better chitchat context)
recent_history = self.conversation_history[-15:] if len(self.conversation_history) > 15 else self.conversation_history
for exchange in recent_history:
messages.append({"role": "user", "content": exchange["user"]})
messages.append({"role": "assistant", "content": exchange["assistant"]})
# Add current user message
messages.append({"role": "user", "content": user_message})
logger.info(f"Sending {len(messages)} messages to AI:")
for i, msg in enumerate(messages):
if msg["role"] == "system":
logger.info(f" {i}: SYSTEM (length: {len(msg['content'])})")
else:
logger.info(f" {i}: {msg['role'].upper()}: '{msg['content']}'")
response = self.client.chat.completions.create(
model=self.model,
messages=messages,
max_tokens=250,
temperature=0.7,
)
ai_response = response.choices[0].message.content
self.conversation_history.append({
"user": user_message,
"assistant": ai_response
})
await self.check_goal_completion(user_message, ai_response)
logger.info(f"Conversation history length: {len(self.conversation_history)}")
if len(self.conversation_history) > 0:
logger.info(f"Last exchange - User: '{self.conversation_history[-1]['user']}', AI: '{self.conversation_history[-1]['assistant']}'")
if self.goal_progress:
completed_goals = [g.description for g in self.goal_progress if g.completed]
logger.info(f"Completed goals: {completed_goals}")
logger.info(f"All goals completed: {self.are_all_goals_completed()}")
return ai_response
except Exception as e:
logger.error(f"AI response error: {str(e)}")
return f"Maaf, ada error: {str(e)}"
class ConversationFlowService:
def __init__(self):
self.stt_service = SpeechToTextService()
self.tts_service = TextToSpeechService()
self.ai_service = AIConversationService()
def set_scenario_personality(self, scenario: str, character_name: str = None):
"""Set the personality based on scenario and character."""
personality = self.ai_service.get_personality_for_scenario(scenario, character_name)
if not self.ai_service.current_personality or self.ai_service.current_personality.name != personality.name:
logger.info(f"Setting new personality: {personality.name}")
self.ai_service.set_personality(personality)
logger.info("Goal progress initialized for new personality")
else:
logger.info(f"Keeping existing personality: {personality.name}")
async def process_conversation_flow(self, transcribed_text: str, scenario_context: str = "") -> Dict[str, Any]:
"""Process the complete conversation flow: Text → AI → Speech."""
try:
scenario = self.extract_scenario_from_context(scenario_context)
if scenario:
self.set_scenario_personality(scenario)
ai_response = await self.ai_service.get_response(transcribed_text, scenario_context)
gender = self.ai_service.current_personality.gender.value if self.ai_service.current_personality else "female"
personality_name = self.ai_service.current_personality.name if self.ai_service.current_personality else "Unknown"
logger.info(f"Generating TTS for character '{personality_name}' with text: '{ai_response}' and gender: '{gender}'")
audio_content = await self.tts_service.synthesize_speech(ai_response, gender, personality_name)
logger.info(f"TTS generation successful, audio length: {len(audio_content)} bytes")
audio_base64 = base64.b64encode(audio_content).decode('utf-8')
goal_status = self.ai_service.get_goal_status()
return {
"type": "ai_response",
"text": ai_response,
"audio": audio_base64,
"audio_format": "mp3",
"character": self.ai_service.current_personality.name if self.ai_service.current_personality else "Unknown",
"goal_status": goal_status,
"conversation_complete": goal_status.get("all_completed", False)
}
except Exception as e:
return {
"type": "error",
"message": f"Conversation flow error: {str(e)}"
}
def extract_scenario_from_context(self, context: str) -> str:
"""Extract scenario type from context string."""
logger.info(f"Extracting scenario from context: '{context}'")
context_lower = context.lower()
detected_scenario = None
if "coffee_shop" in context_lower or "coffee" in context_lower:
detected_scenario = "coffee_shop"
elif "warung" in context_lower or "nasi goreng" in context_lower:
detected_scenario = "warung"
elif "ojek" in context_lower or "mall" in context_lower:
detected_scenario = "ojek"
elif "alfamart" in context_lower or "indomie" in context_lower:
detected_scenario = "alfamart"
else:
detected_scenario = "warung" # Default to warung
logger.info(f"Detected scenario: '{detected_scenario}'")
return detected_scenario
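The keyword routing in `extract_scenario_from_context` can be exercised standalone. A minimal sketch of the same first-match-wins logic, using the scenario keywords from the function above (with `warung` as the default when nothing matches):

```python
# Standalone sketch of the keyword-based scenario routing used above:
# each scenario is matched by substring, first match wins, "warung" is the default.
SCENARIO_KEYWORDS = [
    ("coffee_shop", ["coffee_shop", "coffee"]),
    ("warung", ["warung", "nasi goreng"]),
    ("ojek", ["ojek", "mall"]),
    ("alfamart", ["alfamart", "indomie"]),
]

def extract_scenario(context: str) -> str:
    context_lower = context.lower()
    for scenario, keywords in SCENARIO_KEYWORDS:
        if any(keyword in context_lower for keyword in keywords):
            return scenario
    return "warung"  # default scenario when no keyword matches

print(extract_scenario("Ordering nasi goreng at a warung"))  # warung
print(extract_scenario("Taking an ojek to the mall"))        # ojek
print(extract_scenario("no known keywords here"))            # warung
```

Keeping the keyword table as data rather than an `if`/`elif` chain makes it easier to add scenarios without touching the matching logic.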

819
backend/uv.lock Normal file
View File

@ -0,0 +1,819 @@
version = 1
requires-python = ">=3.11"
resolution-markers = [
"python_full_version >= '3.13'",
"python_full_version < '3.13'",
]
[[package]]
name = "annotated-types"
version = "0.7.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643 },
]
[[package]]
name = "anyio"
version = "4.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna" },
{ name = "sniffio" },
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/95/7d/4c1bd541d4dffa1b52bd83fb8527089e097a106fc90b467a7313b105f840/anyio-4.9.0.tar.gz", hash = "sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028", size = 190949 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a1/ee/48ca1a7c89ffec8b6a0c5d02b89c305671d5ffd8d3c94acf8b8c408575bb/anyio-4.9.0-py3-none-any.whl", hash = "sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c", size = 100916 },
]
[[package]]
name = "cachetools"
version = "5.5.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/6c/81/3747dad6b14fa2cf53fcf10548cf5aea6913e96fab41a3c198676f8948a5/cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4", size = 28380 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/72/76/20fa66124dbe6be5cafeb312ece67de6b61dd91a0247d1ea13db4ebb33c2/cachetools-5.5.2-py3-none-any.whl", hash = "sha256:d26a22bcc62eb95c3beabd9f1ee5e820d3d2704fe2967cbe350e20c8ffcd3f0a", size = 10080 },
]
[[package]]
name = "certifi"
version = "2025.7.14"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b3/76/52c535bcebe74590f296d6c77c86dabf761c41980e1347a2422e4aa2ae41/certifi-2025.7.14.tar.gz", hash = "sha256:8ea99dbdfaaf2ba2f9bac77b9249ef62ec5218e7c2b2e903378ed5fccf765995", size = 163981 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4f/52/34c6cf5bb9285074dc3531c437b3919e825d976fde097a7a73f79e726d03/certifi-2025.7.14-py3-none-any.whl", hash = "sha256:6b31f564a415d79ee77df69d757bb49a5bb53bd9f756cbbe24394ffd6fc1f4b2", size = 162722 },
]
[[package]]
name = "charset-normalizer"
version = "3.4.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e4/33/89c2ced2b67d1c2a61c19c6751aa8902d46ce3dacb23600a283619f5a12d/charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63", size = 126367 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/05/85/4c40d00dcc6284a1c1ad5de5e0996b06f39d8232f1031cd23c2f5c07ee86/charset_normalizer-3.4.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2", size = 198794 },
{ url = "https://files.pythonhosted.org/packages/41/d9/7a6c0b9db952598e97e93cbdfcb91bacd89b9b88c7c983250a77c008703c/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645", size = 142846 },
{ url = "https://files.pythonhosted.org/packages/66/82/a37989cda2ace7e37f36c1a8ed16c58cf48965a79c2142713244bf945c89/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd", size = 153350 },
{ url = "https://files.pythonhosted.org/packages/df/68/a576b31b694d07b53807269d05ec3f6f1093e9545e8607121995ba7a8313/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8", size = 145657 },
{ url = "https://files.pythonhosted.org/packages/92/9b/ad67f03d74554bed3aefd56fe836e1623a50780f7c998d00ca128924a499/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f", size = 147260 },
{ url = "https://files.pythonhosted.org/packages/a6/e6/8aebae25e328160b20e31a7e9929b1578bbdc7f42e66f46595a432f8539e/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7", size = 149164 },
{ url = "https://files.pythonhosted.org/packages/8b/f2/b3c2f07dbcc248805f10e67a0262c93308cfa149a4cd3d1fe01f593e5fd2/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9", size = 144571 },
{ url = "https://files.pythonhosted.org/packages/60/5b/c3f3a94bc345bc211622ea59b4bed9ae63c00920e2e8f11824aa5708e8b7/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544", size = 151952 },
{ url = "https://files.pythonhosted.org/packages/e2/4d/ff460c8b474122334c2fa394a3f99a04cf11c646da895f81402ae54f5c42/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82", size = 155959 },
{ url = "https://files.pythonhosted.org/packages/a2/2b/b964c6a2fda88611a1fe3d4c400d39c66a42d6c169c924818c848f922415/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0", size = 153030 },
{ url = "https://files.pythonhosted.org/packages/59/2e/d3b9811db26a5ebf444bc0fa4f4be5aa6d76fc6e1c0fd537b16c14e849b6/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5", size = 148015 },
{ url = "https://files.pythonhosted.org/packages/90/07/c5fd7c11eafd561bb51220d600a788f1c8d77c5eef37ee49454cc5c35575/charset_normalizer-3.4.2-cp311-cp311-win32.whl", hash = "sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a", size = 98106 },
{ url = "https://files.pythonhosted.org/packages/a8/05/5e33dbef7e2f773d672b6d79f10ec633d4a71cd96db6673625838a4fd532/charset_normalizer-3.4.2-cp311-cp311-win_amd64.whl", hash = "sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28", size = 105402 },
{ url = "https://files.pythonhosted.org/packages/d7/a4/37f4d6035c89cac7930395a35cc0f1b872e652eaafb76a6075943754f095/charset_normalizer-3.4.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0c29de6a1a95f24b9a1aa7aefd27d2487263f00dfd55a77719b530788f75cff7", size = 199936 },
{ url = "https://files.pythonhosted.org/packages/ee/8a/1a5e33b73e0d9287274f899d967907cd0bf9c343e651755d9307e0dbf2b3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cddf7bd982eaa998934a91f69d182aec997c6c468898efe6679af88283b498d3", size = 143790 },
{ url = "https://files.pythonhosted.org/packages/66/52/59521f1d8e6ab1482164fa21409c5ef44da3e9f653c13ba71becdd98dec3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcbe676a55d7445b22c10967bceaaf0ee69407fbe0ece4d032b6eb8d4565982a", size = 153924 },
{ url = "https://files.pythonhosted.org/packages/86/2d/fb55fdf41964ec782febbf33cb64be480a6b8f16ded2dbe8db27a405c09f/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d41c4d287cfc69060fa91cae9683eacffad989f1a10811995fa309df656ec214", size = 146626 },
{ url = "https://files.pythonhosted.org/packages/8c/73/6ede2ec59bce19b3edf4209d70004253ec5f4e319f9a2e3f2f15601ed5f7/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4e594135de17ab3866138f496755f302b72157d115086d100c3f19370839dd3a", size = 148567 },
{ url = "https://files.pythonhosted.org/packages/09/14/957d03c6dc343c04904530b6bef4e5efae5ec7d7990a7cbb868e4595ee30/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf713fe9a71ef6fd5adf7a79670135081cd4431c2943864757f0fa3a65b1fafd", size = 150957 },
{ url = "https://files.pythonhosted.org/packages/0d/c8/8174d0e5c10ccebdcb1b53cc959591c4c722a3ad92461a273e86b9f5a302/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a370b3e078e418187da8c3674eddb9d983ec09445c99a3a263c2011993522981", size = 145408 },
{ url = "https://files.pythonhosted.org/packages/58/aa/8904b84bc8084ac19dc52feb4f5952c6df03ffb460a887b42615ee1382e8/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a955b438e62efdf7e0b7b52a64dc5c3396e2634baa62471768a64bc2adb73d5c", size = 153399 },
{ url = "https://files.pythonhosted.org/packages/c2/26/89ee1f0e264d201cb65cf054aca6038c03b1a0c6b4ae998070392a3ce605/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:7222ffd5e4de8e57e03ce2cef95a4c43c98fcb72ad86909abdfc2c17d227fc1b", size = 156815 },
{ url = "https://files.pythonhosted.org/packages/fd/07/68e95b4b345bad3dbbd3a8681737b4338ff2c9df29856a6d6d23ac4c73cb/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:bee093bf902e1d8fc0ac143c88902c3dfc8941f7ea1d6a8dd2bcb786d33db03d", size = 154537 },
{ url = "https://files.pythonhosted.org/packages/77/1a/5eefc0ce04affb98af07bc05f3bac9094513c0e23b0562d64af46a06aae4/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:dedb8adb91d11846ee08bec4c8236c8549ac721c245678282dcb06b221aab59f", size = 149565 },
{ url = "https://files.pythonhosted.org/packages/37/a0/2410e5e6032a174c95e0806b1a6585eb21e12f445ebe239fac441995226a/charset_normalizer-3.4.2-cp312-cp312-win32.whl", hash = "sha256:db4c7bf0e07fc3b7d89ac2a5880a6a8062056801b83ff56d8464b70f65482b6c", size = 98357 },
{ url = "https://files.pythonhosted.org/packages/6c/4f/c02d5c493967af3eda9c771ad4d2bbc8df6f99ddbeb37ceea6e8716a32bc/charset_normalizer-3.4.2-cp312-cp312-win_amd64.whl", hash = "sha256:5a9979887252a82fefd3d3ed2a8e3b937a7a809f65dcb1e068b090e165bbe99e", size = 105776 },
{ url = "https://files.pythonhosted.org/packages/ea/12/a93df3366ed32db1d907d7593a94f1fe6293903e3e92967bebd6950ed12c/charset_normalizer-3.4.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0", size = 199622 },
{ url = "https://files.pythonhosted.org/packages/04/93/bf204e6f344c39d9937d3c13c8cd5bbfc266472e51fc8c07cb7f64fcd2de/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf", size = 143435 },
{ url = "https://files.pythonhosted.org/packages/22/2a/ea8a2095b0bafa6c5b5a55ffdc2f924455233ee7b91c69b7edfcc9e02284/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e", size = 153653 },
{ url = "https://files.pythonhosted.org/packages/b6/57/1b090ff183d13cef485dfbe272e2fe57622a76694061353c59da52c9a659/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1", size = 146231 },
{ url = "https://files.pythonhosted.org/packages/e2/28/ffc026b26f441fc67bd21ab7f03b313ab3fe46714a14b516f931abe1a2d8/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c", size = 148243 },
{ url = "https://files.pythonhosted.org/packages/c0/0f/9abe9bd191629c33e69e47c6ef45ef99773320e9ad8e9cb08b8ab4a8d4cb/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691", size = 150442 },
{ url = "https://files.pythonhosted.org/packages/67/7c/a123bbcedca91d5916c056407f89a7f5e8fdfce12ba825d7d6b9954a1a3c/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0", size = 145147 },
{ url = "https://files.pythonhosted.org/packages/ec/fe/1ac556fa4899d967b83e9893788e86b6af4d83e4726511eaaad035e36595/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b", size = 153057 },
{ url = "https://files.pythonhosted.org/packages/2b/ff/acfc0b0a70b19e3e54febdd5301a98b72fa07635e56f24f60502e954c461/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff", size = 156454 },
{ url = "https://files.pythonhosted.org/packages/92/08/95b458ce9c740d0645feb0e96cea1f5ec946ea9c580a94adfe0b617f3573/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b", size = 154174 },
{ url = "https://files.pythonhosted.org/packages/78/be/8392efc43487ac051eee6c36d5fbd63032d78f7728cb37aebcc98191f1ff/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148", size = 149166 },
{ url = "https://files.pythonhosted.org/packages/44/96/392abd49b094d30b91d9fbda6a69519e95802250b777841cf3bda8fe136c/charset_normalizer-3.4.2-cp313-cp313-win32.whl", hash = "sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7", size = 98064 },
{ url = "https://files.pythonhosted.org/packages/e9/b0/0200da600134e001d91851ddc797809e2fe0ea72de90e09bec5a2fbdaccb/charset_normalizer-3.4.2-cp313-cp313-win_amd64.whl", hash = "sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980", size = 105641 },
{ url = "https://files.pythonhosted.org/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626 },
]
[[package]]
name = "click"
version = "8.2.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/60/6c/8ca2efa64cf75a977a0d7fac081354553ebe483345c734fb6b6515d96bbc/click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202", size = 286342 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/85/32/10bb5764d90a8eee674e9dc6f4db6a0ab47c8c4d0d83c27f7c39ac415a4d/click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b", size = 102215 },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 },
]
[[package]]
name = "distro"
version = "1.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/fc/f8/98eea607f65de6527f8a2e8885fc8015d3e6f5775df186e443e0964a11c3/distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed", size = 60722 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/12/b3/231ffd4ab1fc9d679809f356cebee130ac7daa00d6d6f3206dd4fd137e9e/distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2", size = 20277 },
]
[[package]]
name = "fastapi"
version = "0.116.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pydantic" },
{ name = "starlette" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/78/d7/6c8b3bfe33eeffa208183ec037fee0cce9f7f024089ab1c5d12ef04bd27c/fastapi-0.116.1.tar.gz", hash = "sha256:ed52cbf946abfd70c5a0dccb24673f0670deeb517a88b3544d03c2a6bf283143", size = 296485 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e5/47/d63c60f59a59467fda0f93f46335c9d18526d7071f025cb5b89d5353ea42/fastapi-0.116.1-py3-none-any.whl", hash = "sha256:c46ac7c312df840f0c9e220f7964bada936781bc4e2e6eb71f1c4d7553786565", size = 95631 },
]

[[package]]
name = "google-api-core"
version = "2.25.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "google-auth" },
{ name = "googleapis-common-protos" },
{ name = "proto-plus" },
{ name = "protobuf" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/dc/21/e9d043e88222317afdbdb567165fdbc3b0aad90064c7e0c9eb0ad9955ad8/google_api_core-2.25.1.tar.gz", hash = "sha256:d2aaa0b13c78c61cb3f4282c464c046e45fbd75755683c9c525e6e8f7ed0a5e8", size = 165443 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/14/4b/ead00905132820b623732b175d66354e9d3e69fcf2a5dcdab780664e7896/google_api_core-2.25.1-py3-none-any.whl", hash = "sha256:8a2a56c1fef82987a524371f99f3bd0143702fecc670c72e600c1cda6bf8dbb7", size = 160807 },
]

[package.optional-dependencies]
grpc = [
{ name = "grpcio" },
{ name = "grpcio-status" },
]

[[package]]
name = "google-auth"
version = "2.40.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cachetools" },
{ name = "pyasn1-modules" },
{ name = "rsa" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9e/9b/e92ef23b84fa10a64ce4831390b7a4c2e53c0132568d99d4ae61d04c8855/google_auth-2.40.3.tar.gz", hash = "sha256:500c3a29adedeb36ea9cf24b8d10858e152f2412e3ca37829b3fa18e33d63b77", size = 281029 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/17/63/b19553b658a1692443c62bd07e5868adaa0ad746a0751ba62c59568cd45b/google_auth-2.40.3-py2.py3-none-any.whl", hash = "sha256:1370d4593e86213563547f97a92752fc658456fe4514c809544f330fed45a7ca", size = 216137 },
]

[[package]]
name = "google-cloud-speech"
version = "2.33.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "google-api-core", extra = ["grpc"] },
{ name = "google-auth" },
{ name = "proto-plus" },
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9a/74/9c5a556f8af19cab461058aa15e1409e7afa453ca2383473a24a12801ef7/google_cloud_speech-2.33.0.tar.gz", hash = "sha256:fd08511b5124fdaa768d71a4054e84a5d8eb02531cb6f84f311c0387ea1314ed", size = 389072 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/12/1d/880342b2541b4bad888ad8ab2ac77d4b5dad25b32a2a1c5f21140c14c8e3/google_cloud_speech-2.33.0-py3-none-any.whl", hash = "sha256:4ba16c8517c24a6abcde877289b0f40b719090504bf06b1adea248198ccd50a5", size = 335681 },
]

[[package]]
name = "google-cloud-texttospeech"
version = "2.27.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "google-api-core", extra = ["grpc"] },
{ name = "google-auth" },
{ name = "proto-plus" },
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3b/65/0873b430c2ad885bde9649bfcdc9e87dca0ad400da4ff1495f62911baa36/google_cloud_texttospeech-2.27.0.tar.gz", hash = "sha256:94a382c95b7cc58efd2505a24c2968e2614fc6bdf9d76fb9a819d4ed29ae188e", size = 182332 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d8/40/1560257fb77b601c0801b370411e0b849b278a90e6232b46c5b84489fb67/google_cloud_texttospeech-2.27.0-py3-none-any.whl", hash = "sha256:0f7c5fe05281beb6d005ea191f61c913085e8439e5ffd2d5d21e29d106150b54", size = 189408 },
]

[[package]]
name = "googleapis-common-protos"
version = "1.70.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/39/24/33db22342cf4a2ea27c9955e6713140fedd51e8b141b5ce5260897020f1a/googleapis_common_protos-1.70.0.tar.gz", hash = "sha256:0e1b44e0ea153e6594f9f394fef15193a68aaaea2d843f83e2742717ca753257", size = 145903 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/86/f1/62a193f0227cf15a920390abe675f386dec35f7ae3ffe6da582d3ade42c7/googleapis_common_protos-1.70.0-py3-none-any.whl", hash = "sha256:b8bfcca8c25a2bb253e0e0b0adaf8c00773e5e6af6fd92397576680b807e0fd8", size = 294530 },
]

[[package]]
name = "grpcio"
version = "1.73.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/79/e8/b43b851537da2e2f03fa8be1aef207e5cbfb1a2e014fbb6b40d24c177cd3/grpcio-1.73.1.tar.gz", hash = "sha256:7fce2cd1c0c1116cf3850564ebfc3264fba75d3c74a7414373f1238ea365ef87", size = 12730355 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e4/41/921565815e871d84043e73e2c0e748f0318dab6fa9be872cd042778f14a9/grpcio-1.73.1-cp311-cp311-linux_armv7l.whl", hash = "sha256:ba2cea9f7ae4bc21f42015f0ec98f69ae4179848ad744b210e7685112fa507a1", size = 5363853 },
{ url = "https://files.pythonhosted.org/packages/b0/cc/9c51109c71d068e4d474becf5f5d43c9d63038cec1b74112978000fa72f4/grpcio-1.73.1-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:d74c3f4f37b79e746271aa6cdb3a1d7e4432aea38735542b23adcabaaee0c097", size = 10621476 },
{ url = "https://files.pythonhosted.org/packages/8f/d3/33d738a06f6dbd4943f4d377468f8299941a7c8c6ac8a385e4cef4dd3c93/grpcio-1.73.1-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:5b9b1805a7d61c9e90541cbe8dfe0a593dfc8c5c3a43fe623701b6a01b01d710", size = 5807903 },
{ url = "https://files.pythonhosted.org/packages/5d/47/36deacd3c967b74e0265f4c608983e897d8bb3254b920f8eafdf60e4ad7e/grpcio-1.73.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b3215f69a0670a8cfa2ab53236d9e8026bfb7ead5d4baabe7d7dc11d30fda967", size = 6448172 },
{ url = "https://files.pythonhosted.org/packages/0e/64/12d6dc446021684ee1428ea56a3f3712048a18beeadbdefa06e6f8814a6e/grpcio-1.73.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc5eccfd9577a5dc7d5612b2ba90cca4ad14c6d949216c68585fdec9848befb1", size = 6044226 },
{ url = "https://files.pythonhosted.org/packages/72/4b/6bae2d88a006000f1152d2c9c10ffd41d0131ca1198e0b661101c2e30ab9/grpcio-1.73.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:dc7d7fd520614fce2e6455ba89791458020a39716951c7c07694f9dbae28e9c0", size = 6135690 },
{ url = "https://files.pythonhosted.org/packages/38/64/02c83b5076510784d1305025e93e0d78f53bb6a0213c8c84cfe8a00c5c48/grpcio-1.73.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:105492124828911f85127e4825d1c1234b032cb9d238567876b5515d01151379", size = 6775867 },
{ url = "https://files.pythonhosted.org/packages/42/72/a13ff7ba6c68ccffa35dacdc06373a76c0008fd75777cba84d7491956620/grpcio-1.73.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:610e19b04f452ba6f402ac9aa94eb3d21fbc94553368008af634812c4a85a99e", size = 6308380 },
{ url = "https://files.pythonhosted.org/packages/65/ae/d29d948021faa0070ec33245c1ae354e2aefabd97e6a9a7b6dcf0fb8ef6b/grpcio-1.73.1-cp311-cp311-win32.whl", hash = "sha256:d60588ab6ba0ac753761ee0e5b30a29398306401bfbceffe7d68ebb21193f9d4", size = 3679139 },
{ url = "https://files.pythonhosted.org/packages/af/66/e1bbb0c95ea222947f0829b3db7692c59b59bcc531df84442e413fa983d9/grpcio-1.73.1-cp311-cp311-win_amd64.whl", hash = "sha256:6957025a4608bb0a5ff42abd75bfbb2ed99eda29d5992ef31d691ab54b753643", size = 4342558 },
{ url = "https://files.pythonhosted.org/packages/b8/41/456caf570c55d5ac26f4c1f2db1f2ac1467d5bf3bcd660cba3e0a25b195f/grpcio-1.73.1-cp312-cp312-linux_armv7l.whl", hash = "sha256:921b25618b084e75d424a9f8e6403bfeb7abef074bb6c3174701e0f2542debcf", size = 5334621 },
{ url = "https://files.pythonhosted.org/packages/2a/c2/9a15e179e49f235bb5e63b01590658c03747a43c9775e20c4e13ca04f4c4/grpcio-1.73.1-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:277b426a0ed341e8447fbf6c1d6b68c952adddf585ea4685aa563de0f03df887", size = 10601131 },
{ url = "https://files.pythonhosted.org/packages/0c/1d/1d39e90ef6348a0964caa7c5c4d05f3bae2c51ab429eb7d2e21198ac9b6d/grpcio-1.73.1-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:96c112333309493c10e118d92f04594f9055774757f5d101b39f8150f8c25582", size = 5759268 },
{ url = "https://files.pythonhosted.org/packages/8a/2b/2dfe9ae43de75616177bc576df4c36d6401e0959833b2e5b2d58d50c1f6b/grpcio-1.73.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f48e862aed925ae987eb7084409a80985de75243389dc9d9c271dd711e589918", size = 6409791 },
{ url = "https://files.pythonhosted.org/packages/6e/66/e8fe779b23b5a26d1b6949e5c70bc0a5fd08f61a6ec5ac7760d589229511/grpcio-1.73.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83a6c2cce218e28f5040429835fa34a29319071079e3169f9543c3fbeff166d2", size = 6003728 },
{ url = "https://files.pythonhosted.org/packages/a9/39/57a18fcef567784108c4fc3f5441cb9938ae5a51378505aafe81e8e15ecc/grpcio-1.73.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:65b0458a10b100d815a8426b1442bd17001fdb77ea13665b2f7dc9e8587fdc6b", size = 6103364 },
{ url = "https://files.pythonhosted.org/packages/c5/46/28919d2aa038712fc399d02fa83e998abd8c1f46c2680c5689deca06d1b2/grpcio-1.73.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:0a9f3ea8dce9eae9d7cb36827200133a72b37a63896e0e61a9d5ec7d61a59ab1", size = 6749194 },
{ url = "https://files.pythonhosted.org/packages/3d/56/3898526f1fad588c5d19a29ea0a3a4996fb4fa7d7c02dc1be0c9fd188b62/grpcio-1.73.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:de18769aea47f18e782bf6819a37c1c528914bfd5683b8782b9da356506190c8", size = 6283902 },
{ url = "https://files.pythonhosted.org/packages/dc/64/18b77b89c5870d8ea91818feb0c3ffb5b31b48d1b0ee3e0f0d539730fea3/grpcio-1.73.1-cp312-cp312-win32.whl", hash = "sha256:24e06a5319e33041e322d32c62b1e728f18ab8c9dbc91729a3d9f9e3ed336642", size = 3668687 },
{ url = "https://files.pythonhosted.org/packages/3c/52/302448ca6e52f2a77166b2e2ed75f5d08feca4f2145faf75cb768cccb25b/grpcio-1.73.1-cp312-cp312-win_amd64.whl", hash = "sha256:303c8135d8ab176f8038c14cc10d698ae1db9c480f2b2823f7a987aa2a4c5646", size = 4334887 },
{ url = "https://files.pythonhosted.org/packages/37/bf/4ca20d1acbefabcaba633ab17f4244cbbe8eca877df01517207bd6655914/grpcio-1.73.1-cp313-cp313-linux_armv7l.whl", hash = "sha256:b310824ab5092cf74750ebd8a8a8981c1810cb2b363210e70d06ef37ad80d4f9", size = 5335615 },
{ url = "https://files.pythonhosted.org/packages/75/ed/45c345f284abec5d4f6d77cbca9c52c39b554397eb7de7d2fcf440bcd049/grpcio-1.73.1-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:8f5a6df3fba31a3485096ac85b2e34b9666ffb0590df0cd044f58694e6a1f6b5", size = 10595497 },
{ url = "https://files.pythonhosted.org/packages/a4/75/bff2c2728018f546d812b755455014bc718f8cdcbf5c84f1f6e5494443a8/grpcio-1.73.1-cp313-cp313-manylinux_2_17_aarch64.whl", hash = "sha256:052e28fe9c41357da42250a91926a3e2f74c046575c070b69659467ca5aa976b", size = 5765321 },
{ url = "https://files.pythonhosted.org/packages/70/3b/14e43158d3b81a38251b1d231dfb45a9b492d872102a919fbf7ba4ac20cd/grpcio-1.73.1-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c0bf15f629b1497436596b1cbddddfa3234273490229ca29561209778ebe182", size = 6415436 },
{ url = "https://files.pythonhosted.org/packages/e5/3f/81d9650ca40b54338336fd360f36773be8cb6c07c036e751d8996eb96598/grpcio-1.73.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ab860d5bfa788c5a021fba264802e2593688cd965d1374d31d2b1a34cacd854", size = 6007012 },
{ url = "https://files.pythonhosted.org/packages/55/f4/59edf5af68d684d0f4f7ad9462a418ac517201c238551529098c9aa28cb0/grpcio-1.73.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:ad1d958c31cc91ab050bd8a91355480b8e0683e21176522bacea225ce51163f2", size = 6105209 },
{ url = "https://files.pythonhosted.org/packages/e4/a8/700d034d5d0786a5ba14bfa9ce974ed4c976936c2748c2bd87aa50f69b36/grpcio-1.73.1-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:f43ffb3bd415c57224c7427bfb9e6c46a0b6e998754bfa0d00f408e1873dcbb5", size = 6753655 },
{ url = "https://files.pythonhosted.org/packages/1f/29/efbd4ac837c23bc48e34bbaf32bd429f0dc9ad7f80721cdb4622144c118c/grpcio-1.73.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:686231cdd03a8a8055f798b2b54b19428cdf18fa1549bee92249b43607c42668", size = 6287288 },
{ url = "https://files.pythonhosted.org/packages/d8/61/c6045d2ce16624bbe18b5d169c1a5ce4d6c3a47bc9d0e5c4fa6a50ed1239/grpcio-1.73.1-cp313-cp313-win32.whl", hash = "sha256:89018866a096e2ce21e05eabed1567479713ebe57b1db7cbb0f1e3b896793ba4", size = 3668151 },
{ url = "https://files.pythonhosted.org/packages/c2/d7/77ac689216daee10de318db5aa1b88d159432dc76a130948a56b3aa671a2/grpcio-1.73.1-cp313-cp313-win_amd64.whl", hash = "sha256:4a68f8c9966b94dff693670a5cf2b54888a48a5011c5d9ce2295a1a1465ee84f", size = 4335747 },
]

[[package]]
name = "grpcio-status"
version = "1.73.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "googleapis-common-protos" },
{ name = "grpcio" },
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f6/59/9350a13804f2e407d76b3962c548e023639fc1545056e342c6bad0d4fd30/grpcio_status-1.73.1.tar.gz", hash = "sha256:928f49ccf9688db5f20cd9e45c4578a1d01ccca29aeaabf066f2ac76aa886668", size = 13664 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/50/ee32e6073e2c3a4457be168e2bbf84d02ad9d2c18c4a578a641480c293d4/grpcio_status-1.73.1-py3-none-any.whl", hash = "sha256:538595c32a6c819c32b46a621a51e9ae4ffcd7e7e1bce35f728ef3447e9809b6", size = 14422 },
]

[[package]]
name = "h11"
version = "0.16.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515 },
]

[[package]]
name = "httpcore"
version = "1.0.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784 },
]

[[package]]
name = "httpx"
version = "0.28.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "certifi" },
{ name = "httpcore" },
{ name = "idna" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517 },
]

[[package]]
name = "idna"
version = "3.10"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442 },
]

[[package]]
name = "iniconfig"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050 },
]

[[package]]
name = "jiter"
version = "0.10.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ee/9d/ae7ddb4b8ab3fb1b51faf4deb36cb48a4fbbd7cb36bad6a5fca4741306f7/jiter-0.10.0.tar.gz", hash = "sha256:07a7142c38aacc85194391108dc91b5b57093c978a9932bd86a36862759d9500", size = 162759 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1b/dd/6cefc6bd68b1c3c979cecfa7029ab582b57690a31cd2f346c4d0ce7951b6/jiter-0.10.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:3bebe0c558e19902c96e99217e0b8e8b17d570906e72ed8a87170bc290b1e978", size = 317473 },
{ url = "https://files.pythonhosted.org/packages/be/cf/fc33f5159ce132be1d8dd57251a1ec7a631c7df4bd11e1cd198308c6ae32/jiter-0.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:558cc7e44fd8e507a236bee6a02fa17199ba752874400a0ca6cd6e2196cdb7dc", size = 321971 },
{ url = "https://files.pythonhosted.org/packages/68/a4/da3f150cf1d51f6c472616fb7650429c7ce053e0c962b41b68557fdf6379/jiter-0.10.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d613e4b379a07d7c8453c5712ce7014e86c6ac93d990a0b8e7377e18505e98d", size = 345574 },
{ url = "https://files.pythonhosted.org/packages/84/34/6e8d412e60ff06b186040e77da5f83bc158e9735759fcae65b37d681f28b/jiter-0.10.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f62cf8ba0618eda841b9bf61797f21c5ebd15a7a1e19daab76e4e4b498d515b2", size = 371028 },
{ url = "https://files.pythonhosted.org/packages/fb/d9/9ee86173aae4576c35a2f50ae930d2ccb4c4c236f6cb9353267aa1d626b7/jiter-0.10.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:919d139cdfa8ae8945112398511cb7fca58a77382617d279556b344867a37e61", size = 491083 },
{ url = "https://files.pythonhosted.org/packages/d9/2c/f955de55e74771493ac9e188b0f731524c6a995dffdcb8c255b89c6fb74b/jiter-0.10.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13ddbc6ae311175a3b03bd8994881bc4635c923754932918e18da841632349db", size = 388821 },
{ url = "https://files.pythonhosted.org/packages/81/5a/0e73541b6edd3f4aada586c24e50626c7815c561a7ba337d6a7eb0a915b4/jiter-0.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c440ea003ad10927a30521a9062ce10b5479592e8a70da27f21eeb457b4a9c5", size = 352174 },
{ url = "https://files.pythonhosted.org/packages/1c/c0/61eeec33b8c75b31cae42be14d44f9e6fe3ac15a4e58010256ac3abf3638/jiter-0.10.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dc347c87944983481e138dea467c0551080c86b9d21de6ea9306efb12ca8f606", size = 391869 },
{ url = "https://files.pythonhosted.org/packages/41/22/5beb5ee4ad4ef7d86f5ea5b4509f680a20706c4a7659e74344777efb7739/jiter-0.10.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:13252b58c1f4d8c5b63ab103c03d909e8e1e7842d302473f482915d95fefd605", size = 523741 },
{ url = "https://files.pythonhosted.org/packages/ea/10/768e8818538e5817c637b0df52e54366ec4cebc3346108a4457ea7a98f32/jiter-0.10.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7d1bbf3c465de4a24ab12fb7766a0003f6f9bce48b8b6a886158c4d569452dc5", size = 514527 },
{ url = "https://files.pythonhosted.org/packages/73/6d/29b7c2dc76ce93cbedabfd842fc9096d01a0550c52692dfc33d3cc889815/jiter-0.10.0-cp311-cp311-win32.whl", hash = "sha256:db16e4848b7e826edca4ccdd5b145939758dadf0dc06e7007ad0e9cfb5928ae7", size = 210765 },
{ url = "https://files.pythonhosted.org/packages/c2/c9/d394706deb4c660137caf13e33d05a031d734eb99c051142e039d8ceb794/jiter-0.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:9c9c1d5f10e18909e993f9641f12fe1c77b3e9b533ee94ffa970acc14ded3812", size = 209234 },
{ url = "https://files.pythonhosted.org/packages/6d/b5/348b3313c58f5fbfb2194eb4d07e46a35748ba6e5b3b3046143f3040bafa/jiter-0.10.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:1e274728e4a5345a6dde2d343c8da018b9d4bd4350f5a472fa91f66fda44911b", size = 312262 },
{ url = "https://files.pythonhosted.org/packages/9c/4a/6a2397096162b21645162825f058d1709a02965606e537e3304b02742e9b/jiter-0.10.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7202ae396446c988cb2a5feb33a543ab2165b786ac97f53b59aafb803fef0744", size = 320124 },
{ url = "https://files.pythonhosted.org/packages/2a/85/1ce02cade7516b726dd88f59a4ee46914bf79d1676d1228ef2002ed2f1c9/jiter-0.10.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23ba7722d6748b6920ed02a8f1726fb4b33e0fd2f3f621816a8b486c66410ab2", size = 345330 },
{ url = "https://files.pythonhosted.org/packages/75/d0/bb6b4f209a77190ce10ea8d7e50bf3725fc16d3372d0a9f11985a2b23eff/jiter-0.10.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:371eab43c0a288537d30e1f0b193bc4eca90439fc08a022dd83e5e07500ed026", size = 369670 },
{ url = "https://files.pythonhosted.org/packages/a0/f5/a61787da9b8847a601e6827fbc42ecb12be2c925ced3252c8ffcb56afcaf/jiter-0.10.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6c675736059020365cebc845a820214765162728b51ab1e03a1b7b3abb70f74c", size = 489057 },
{ url = "https://files.pythonhosted.org/packages/12/e4/6f906272810a7b21406c760a53aadbe52e99ee070fc5c0cb191e316de30b/jiter-0.10.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0c5867d40ab716e4684858e4887489685968a47e3ba222e44cde6e4a2154f959", size = 389372 },
{ url = "https://files.pythonhosted.org/packages/e2/ba/77013b0b8ba904bf3762f11e0129b8928bff7f978a81838dfcc958ad5728/jiter-0.10.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:395bb9a26111b60141757d874d27fdea01b17e8fac958b91c20128ba8f4acc8a", size = 352038 },
{ url = "https://files.pythonhosted.org/packages/67/27/c62568e3ccb03368dbcc44a1ef3a423cb86778a4389e995125d3d1aaa0a4/jiter-0.10.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6842184aed5cdb07e0c7e20e5bdcfafe33515ee1741a6835353bb45fe5d1bd95", size = 391538 },
{ url = "https://files.pythonhosted.org/packages/c0/72/0d6b7e31fc17a8fdce76164884edef0698ba556b8eb0af9546ae1a06b91d/jiter-0.10.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:62755d1bcea9876770d4df713d82606c8c1a3dca88ff39046b85a048566d56ea", size = 523557 },
{ url = "https://files.pythonhosted.org/packages/2f/09/bc1661fbbcbeb6244bd2904ff3a06f340aa77a2b94e5a7373fd165960ea3/jiter-0.10.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:533efbce2cacec78d5ba73a41756beff8431dfa1694b6346ce7af3a12c42202b", size = 514202 },
{ url = "https://files.pythonhosted.org/packages/1b/84/5a5d5400e9d4d54b8004c9673bbe4403928a00d28529ff35b19e9d176b19/jiter-0.10.0-cp312-cp312-win32.whl", hash = "sha256:8be921f0cadd245e981b964dfbcd6fd4bc4e254cdc069490416dd7a2632ecc01", size = 211781 },
{ url = "https://files.pythonhosted.org/packages/9b/52/7ec47455e26f2d6e5f2ea4951a0652c06e5b995c291f723973ae9e724a65/jiter-0.10.0-cp312-cp312-win_amd64.whl", hash = "sha256:a7c7d785ae9dda68c2678532a5a1581347e9c15362ae9f6e68f3fdbfb64f2e49", size = 206176 },
{ url = "https://files.pythonhosted.org/packages/2e/b0/279597e7a270e8d22623fea6c5d4eeac328e7d95c236ed51a2b884c54f70/jiter-0.10.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:e0588107ec8e11b6f5ef0e0d656fb2803ac6cf94a96b2b9fc675c0e3ab5e8644", size = 311617 },
{ url = "https://files.pythonhosted.org/packages/91/e3/0916334936f356d605f54cc164af4060e3e7094364add445a3bc79335d46/jiter-0.10.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cafc4628b616dc32530c20ee53d71589816cf385dd9449633e910d596b1f5c8a", size = 318947 },
{ url = "https://files.pythonhosted.org/packages/6a/8e/fd94e8c02d0e94539b7d669a7ebbd2776e51f329bb2c84d4385e8063a2ad/jiter-0.10.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:520ef6d981172693786a49ff5b09eda72a42e539f14788124a07530f785c3ad6", size = 344618 },
{ url = "https://files.pythonhosted.org/packages/6f/b0/f9f0a2ec42c6e9c2e61c327824687f1e2415b767e1089c1d9135f43816bd/jiter-0.10.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:554dedfd05937f8fc45d17ebdf298fe7e0c77458232bcb73d9fbbf4c6455f5b3", size = 368829 },
{ url = "https://files.pythonhosted.org/packages/e8/57/5bbcd5331910595ad53b9fd0c610392ac68692176f05ae48d6ce5c852967/jiter-0.10.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5bc299da7789deacf95f64052d97f75c16d4fc8c4c214a22bf8d859a4288a1c2", size = 491034 },
{ url = "https://files.pythonhosted.org/packages/9b/be/c393df00e6e6e9e623a73551774449f2f23b6ec6a502a3297aeeece2c65a/jiter-0.10.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5161e201172de298a8a1baad95eb85db4fb90e902353b1f6a41d64ea64644e25", size = 388529 },
{ url = "https://files.pythonhosted.org/packages/42/3e/df2235c54d365434c7f150b986a6e35f41ebdc2f95acea3036d99613025d/jiter-0.10.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e2227db6ba93cb3e2bf67c87e594adde0609f146344e8207e8730364db27041", size = 350671 },
{ url = "https://files.pythonhosted.org/packages/c6/77/71b0b24cbcc28f55ab4dbfe029f9a5b73aeadaba677843fc6dc9ed2b1d0a/jiter-0.10.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:15acb267ea5e2c64515574b06a8bf393fbfee6a50eb1673614aa45f4613c0cca", size = 390864 },
{ url = "https://files.pythonhosted.org/packages/6a/d3/ef774b6969b9b6178e1d1e7a89a3bd37d241f3d3ec5f8deb37bbd203714a/jiter-0.10.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:901b92f2e2947dc6dfcb52fd624453862e16665ea909a08398dde19c0731b7f4", size = 522989 },
{ url = "https://files.pythonhosted.org/packages/0c/41/9becdb1d8dd5d854142f45a9d71949ed7e87a8e312b0bede2de849388cb9/jiter-0.10.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d0cb9a125d5a3ec971a094a845eadde2db0de85b33c9f13eb94a0c63d463879e", size = 513495 },
{ url = "https://files.pythonhosted.org/packages/9c/36/3468e5a18238bdedae7c4d19461265b5e9b8e288d3f86cd89d00cbb48686/jiter-0.10.0-cp313-cp313-win32.whl", hash = "sha256:48a403277ad1ee208fb930bdf91745e4d2d6e47253eedc96e2559d1e6527006d", size = 211289 },
{ url = "https://files.pythonhosted.org/packages/7e/07/1c96b623128bcb913706e294adb5f768fb7baf8db5e1338ce7b4ee8c78ef/jiter-0.10.0-cp313-cp313-win_amd64.whl", hash = "sha256:75f9eb72ecb640619c29bf714e78c9c46c9c4eaafd644bf78577ede459f330d4", size = 205074 },
{ url = "https://files.pythonhosted.org/packages/54/46/caa2c1342655f57d8f0f2519774c6d67132205909c65e9aa8255e1d7b4f4/jiter-0.10.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:28ed2a4c05a1f32ef0e1d24c2611330219fed727dae01789f4a335617634b1ca", size = 318225 },
{ url = "https://files.pythonhosted.org/packages/43/84/c7d44c75767e18946219ba2d703a5a32ab37b0bc21886a97bc6062e4da42/jiter-0.10.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14a4c418b1ec86a195f1ca69da8b23e8926c752b685af665ce30777233dfe070", size = 350235 },
{ url = "https://files.pythonhosted.org/packages/01/16/f5a0135ccd968b480daad0e6ab34b0c7c5ba3bc447e5088152696140dcb3/jiter-0.10.0-cp313-cp313t-win_amd64.whl", hash = "sha256:d7bfed2fe1fe0e4dda6ef682cee888ba444b21e7a6553e03252e4feb6cf0adca", size = 207278 },
{ url = "https://files.pythonhosted.org/packages/1c/9b/1d646da42c3de6c2188fdaa15bce8ecb22b635904fc68be025e21249ba44/jiter-0.10.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:5e9251a5e83fab8d87799d3e1a46cb4b7f2919b895c6f4483629ed2446f66522", size = 310866 },
{ url = "https://files.pythonhosted.org/packages/ad/0e/26538b158e8a7c7987e94e7aeb2999e2e82b1f9d2e1f6e9874ddf71ebda0/jiter-0.10.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:023aa0204126fe5b87ccbcd75c8a0d0261b9abdbbf46d55e7ae9f8e22424eeb8", size = 318772 },
{ url = "https://files.pythonhosted.org/packages/7b/fb/d302893151caa1c2636d6574d213e4b34e31fd077af6050a9c5cbb42f6fb/jiter-0.10.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c189c4f1779c05f75fc17c0c1267594ed918996a231593a21a5ca5438445216", size = 344534 },
{ url = "https://files.pythonhosted.org/packages/01/d8/5780b64a149d74e347c5128d82176eb1e3241b1391ac07935693466d6219/jiter-0.10.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:15720084d90d1098ca0229352607cd68256c76991f6b374af96f36920eae13c4", size = 369087 },
{ url = "https://files.pythonhosted.org/packages/e8/5b/f235a1437445160e777544f3ade57544daf96ba7e96c1a5b24a6f7ac7004/jiter-0.10.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e4f2fb68e5f1cfee30e2b2a09549a00683e0fde4c6a2ab88c94072fc33cb7426", size = 490694 },
{ url = "https://files.pythonhosted.org/packages/85/a9/9c3d4617caa2ff89cf61b41e83820c27ebb3f7b5fae8a72901e8cd6ff9be/jiter-0.10.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ce541693355fc6da424c08b7edf39a2895f58d6ea17d92cc2b168d20907dee12", size = 388992 },
{ url = "https://files.pythonhosted.org/packages/68/b1/344fd14049ba5c94526540af7eb661871f9c54d5f5601ff41a959b9a0bbd/jiter-0.10.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31c50c40272e189d50006ad5c73883caabb73d4e9748a688b216e85a9a9ca3b9", size = 351723 },
{ url = "https://files.pythonhosted.org/packages/41/89/4c0e345041186f82a31aee7b9d4219a910df672b9fef26f129f0cda07a29/jiter-0.10.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fa3402a2ff9815960e0372a47b75c76979d74402448509ccd49a275fa983ef8a", size = 392215 },
{ url = "https://files.pythonhosted.org/packages/55/58/ee607863e18d3f895feb802154a2177d7e823a7103f000df182e0f718b38/jiter-0.10.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:1956f934dca32d7bb647ea21d06d93ca40868b505c228556d3373cbd255ce853", size = 522762 },
{ url = "https://files.pythonhosted.org/packages/15/d0/9123fb41825490d16929e73c212de9a42913d68324a8ce3c8476cae7ac9d/jiter-0.10.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:fcedb049bdfc555e261d6f65a6abe1d5ad68825b7202ccb9692636c70fcced86", size = 513427 },
{ url = "https://files.pythonhosted.org/packages/d8/b3/2bd02071c5a2430d0b70403a34411fc519c2f227da7b03da9ba6a956f931/jiter-0.10.0-cp314-cp314-win32.whl", hash = "sha256:ac509f7eccca54b2a29daeb516fb95b6f0bd0d0d8084efaf8ed5dfc7b9f0b357", size = 210127 },
{ url = "https://files.pythonhosted.org/packages/03/0c/5fe86614ea050c3ecd728ab4035534387cd41e7c1855ef6c031f1ca93e3f/jiter-0.10.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5ed975b83a2b8639356151cef5c0d597c68376fc4922b45d0eb384ac058cfa00", size = 318527 },
{ url = "https://files.pythonhosted.org/packages/b3/4a/4175a563579e884192ba6e81725fc0448b042024419be8d83aa8a80a3f44/jiter-0.10.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3aa96f2abba33dc77f79b4cf791840230375f9534e5fac927ccceb58c5e604a5", size = 354213 },
]

[[package]]
name = "learn-indonesian-backend"
version = "0.1.0"
source = { editable = "." }
dependencies = [
{ name = "fastapi" },
{ name = "google-cloud-speech" },
{ name = "google-cloud-texttospeech" },
{ name = "openai" },
{ name = "pydantic" },
{ name = "python-dotenv" },
{ name = "python-multipart" },
{ name = "uvicorn" },
{ name = "websockets" },
]

[package.dev-dependencies]
dev = [
{ name = "httpx" },
{ name = "pytest" },
{ name = "pytest-asyncio" },
{ name = "ruff" },
]

[package.metadata]
requires-dist = [
{ name = "fastapi", specifier = ">=0.104.1" },
{ name = "google-cloud-speech", specifier = ">=2.21.0" },
{ name = "google-cloud-texttospeech", specifier = ">=2.14.2" },
{ name = "openai", specifier = ">=1.0.0" },
{ name = "pydantic", specifier = ">=2.5.0" },
{ name = "python-dotenv", specifier = ">=1.0.0" },
{ name = "python-multipart", specifier = ">=0.0.6" },
{ name = "uvicorn", specifier = ">=0.24.0" },
{ name = "websockets", specifier = ">=11.0.3" },
]

[package.metadata.requires-dev]
dev = [
{ name = "httpx", specifier = ">=0.25.2" },
{ name = "pytest", specifier = ">=7.4.3" },
{ name = "pytest-asyncio", specifier = ">=0.21.1" },
{ name = "ruff", specifier = ">=0.1.6" },
]
[[package]]
name = "openai"
version = "1.95.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "distro" },
{ name = "httpx" },
{ name = "jiter" },
{ name = "pydantic" },
{ name = "sniffio" },
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a1/a3/70cd57c7d71086c532ce90de5fdef4165dc6ae9dbf346da6737ff9ebafaa/openai-1.95.1.tar.gz", hash = "sha256:f089b605282e2a2b6776090b4b46563ac1da77f56402a222597d591e2dcc1086", size = 488271 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/02/1d/0432ea635097f4dbb34641a3650803d8a4aa29d06bafc66583bf1adcceb4/openai-1.95.1-py3-none-any.whl", hash = "sha256:8bbdfeceef231b1ddfabbc232b179d79f8b849aab5a7da131178f8d10e0f162f", size = 755613 },
]
[[package]]
name = "packaging"
version = "25.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469 },
]
[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538 },
]
[[package]]
name = "proto-plus"
version = "1.26.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f4/ac/87285f15f7cce6d4a008f33f1757fb5a13611ea8914eb58c3d0d26243468/proto_plus-1.26.1.tar.gz", hash = "sha256:21a515a4c4c0088a773899e23c7bbade3d18f9c66c73edd4c7ee3816bc96a012", size = 56142 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4e/6d/280c4c2ce28b1593a19ad5239c8b826871fc6ec275c21afc8e1820108039/proto_plus-1.26.1-py3-none-any.whl", hash = "sha256:13285478c2dcf2abb829db158e1047e2f1e8d63a077d94263c2b88b043c75a66", size = 50163 },
]
[[package]]
name = "protobuf"
version = "6.31.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/52/f3/b9655a711b32c19720253f6f06326faf90580834e2e83f840472d752bc8b/protobuf-6.31.1.tar.gz", hash = "sha256:d8cac4c982f0b957a4dc73a80e2ea24fab08e679c0de9deb835f4a12d69aca9a", size = 441797 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f3/6f/6ab8e4bf962fd5570d3deaa2d5c38f0a363f57b4501047b5ebeb83ab1125/protobuf-6.31.1-cp310-abi3-win32.whl", hash = "sha256:7fa17d5a29c2e04b7d90e5e32388b8bfd0e7107cd8e616feef7ed3fa6bdab5c9", size = 423603 },
{ url = "https://files.pythonhosted.org/packages/44/3a/b15c4347dd4bf3a1b0ee882f384623e2063bb5cf9fa9d57990a4f7df2fb6/protobuf-6.31.1-cp310-abi3-win_amd64.whl", hash = "sha256:426f59d2964864a1a366254fa703b8632dcec0790d8862d30034d8245e1cd447", size = 435283 },
{ url = "https://files.pythonhosted.org/packages/6a/c9/b9689a2a250264a84e66c46d8862ba788ee7a641cdca39bccf64f59284b7/protobuf-6.31.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:6f1227473dc43d44ed644425268eb7c2e488ae245d51c6866d19fe158e207402", size = 425604 },
{ url = "https://files.pythonhosted.org/packages/76/a1/7a5a94032c83375e4fe7e7f56e3976ea6ac90c5e85fac8576409e25c39c3/protobuf-6.31.1-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:a40fc12b84c154884d7d4c4ebd675d5b3b5283e155f324049ae396b95ddebc39", size = 322115 },
{ url = "https://files.pythonhosted.org/packages/fa/b1/b59d405d64d31999244643d88c45c8241c58f17cc887e73bcb90602327f8/protobuf-6.31.1-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:4ee898bf66f7a8b0bd21bce523814e6fbd8c6add948045ce958b73af7e8878c6", size = 321070 },
{ url = "https://files.pythonhosted.org/packages/f7/af/ab3c51ab7507a7325e98ffe691d9495ee3d3aa5f589afad65ec920d39821/protobuf-6.31.1-py3-none-any.whl", hash = "sha256:720a6c7e6b77288b85063569baae8536671b39f15cc22037ec7045658d80489e", size = 168724 },
]
[[package]]
name = "pyasn1"
version = "0.6.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ba/e9/01f1a64245b89f039897cb0130016d79f77d52669aae6ee7b159a6c4c018/pyasn1-0.6.1.tar.gz", hash = "sha256:6f580d2bdd84365380830acf45550f2511469f673cb4a5ae3857a3170128b034", size = 145322 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c8/f1/d6a797abb14f6283c0ddff96bbdd46937f64122b8c925cab503dd37f8214/pyasn1-0.6.1-py3-none-any.whl", hash = "sha256:0d632f46f2ba09143da3a8afe9e33fb6f92fa2320ab7e886e2d0f7672af84629", size = 83135 },
]
[[package]]
name = "pyasn1-modules"
version = "0.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyasn1" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e9/e6/78ebbb10a8c8e4b61a59249394a4a594c1a7af95593dc933a349c8d00964/pyasn1_modules-0.4.2.tar.gz", hash = "sha256:677091de870a80aae844b1ca6134f54652fa2c8c5a52aa396440ac3106e941e6", size = 307892 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/47/8d/d529b5d697919ba8c11ad626e835d4039be708a35b0d22de83a269a6682c/pyasn1_modules-0.4.2-py3-none-any.whl", hash = "sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a", size = 181259 },
]
[[package]]
name = "pydantic"
version = "2.11.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
{ name = "pydantic-core" },
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/00/dd/4325abf92c39ba8623b5af936ddb36ffcfe0beae70405d456ab1fb2f5b8c/pydantic-2.11.7.tar.gz", hash = "sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db", size = 788350 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6a/c0/ec2b1c8712ca690e5d61979dee872603e92b8a32f94cc1b72d53beab008a/pydantic-2.11.7-py3-none-any.whl", hash = "sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b", size = 444782 },
]
[[package]]
name = "pydantic-core"
version = "2.33.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584 },
{ url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071 },
{ url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823 },
{ url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792 },
{ url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338 },
{ url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998 },
{ url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200 },
{ url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890 },
{ url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359 },
{ url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883 },
{ url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074 },
{ url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538 },
{ url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909 },
{ url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786 },
{ url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000 },
{ url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996 },
{ url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957 },
{ url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199 },
{ url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296 },
{ url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109 },
{ url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028 },
{ url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044 },
{ url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881 },
{ url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034 },
{ url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187 },
{ url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628 },
{ url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866 },
{ url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894 },
{ url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688 },
{ url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808 },
{ url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580 },
{ url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859 },
{ url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810 },
{ url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498 },
{ url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611 },
{ url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924 },
{ url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196 },
{ url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389 },
{ url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223 },
{ url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473 },
{ url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269 },
{ url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921 },
{ url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162 },
{ url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560 },
{ url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777 },
{ url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200 },
{ url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123 },
{ url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852 },
{ url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484 },
{ url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896 },
{ url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475 },
{ url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013 },
{ url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715 },
{ url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757 },
]
[[package]]
name = "pygments"
version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217 },
]
[[package]]
name = "pytest"
version = "8.4.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/08/ba/45911d754e8eba3d5a841a5ce61a65a685ff1798421ac054f85aa8747dfb/pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c", size = 1517714 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/29/16/c8a903f4c4dffe7a12843191437d7cd8e32751d5de349d45d3fe69544e87/pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7", size = 365474 },
]
[[package]]
name = "pytest-asyncio"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d0/d4/14f53324cb1a6381bef29d698987625d80052bb33932d8e7cbf9b337b17c/pytest_asyncio-1.0.0.tar.gz", hash = "sha256:d15463d13f4456e1ead2594520216b225a16f781e144f8fdf6c5bb4667c48b3f", size = 46960 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/30/05/ce271016e351fddc8399e546f6e23761967ee09c8c568bbfbecb0c150171/pytest_asyncio-1.0.0-py3-none-any.whl", hash = "sha256:4f024da9f1ef945e680dc68610b52550e36590a67fd31bb3b4943979a1f90ef3", size = 15976 },
]
[[package]]
name = "python-dotenv"
version = "1.1.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f6/b0/4bc07ccd3572a2f9df7e6782f52b0c6c90dcbb803ac4a167702d7d0dfe1e/python_dotenv-1.1.1.tar.gz", hash = "sha256:a8a6399716257f45be6a007360200409fce5cda2661e3dec71d23dc15f6189ab", size = 41978 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/5f/ed/539768cf28c661b5b068d66d96a2f155c4971a5d55684a514c1a0e0dec2f/python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc", size = 20556 },
]
[[package]]
name = "python-multipart"
version = "0.0.20"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f3/87/f44d7c9f274c7ee665a29b885ec97089ec5dc034c7f3fafa03da9e39a09e/python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13", size = 37158 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/58/38b5afbc1a800eeea951b9285d3912613f2603bdf897a4ab0f4bd7f405fc/python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104", size = 24546 },
]
[[package]]
name = "requests"
version = "2.32.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "charset-normalizer" },
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e1/0a/929373653770d8a0d7ea76c37de6e41f11eb07559b103b1c02cafb3f7cf8/requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422", size = 135258 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7c/e4/56027c4a6b4ae70ca9de302488c5ca95ad4a39e190093d6c1a8ace08341b/requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c", size = 64847 },
]
[[package]]
name = "rsa"
version = "4.9.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyasn1" },
]
sdist = { url = "https://files.pythonhosted.org/packages/da/8a/22b7beea3ee0d44b1916c0c1cb0ee3af23b700b6da9f04991899d0c555d4/rsa-4.9.1.tar.gz", hash = "sha256:e7bdbfdb5497da4c07dfd35530e1a902659db6ff241e39d9953cad06ebd0ae75", size = 29034 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/64/8d/0133e4eb4beed9e425d9a98ed6e081a55d195481b7632472be1af08d2f6b/rsa-4.9.1-py3-none-any.whl", hash = "sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762", size = 34696 },
]
[[package]]
name = "ruff"
version = "0.12.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c3/2a/43955b530c49684d3c38fcda18c43caf91e99204c2a065552528e0552d4f/ruff-0.12.3.tar.gz", hash = "sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77", size = 4459341 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e2/fd/b44c5115539de0d598d75232a1cc7201430b6891808df111b8b0506aae43/ruff-0.12.3-py3-none-linux_armv6l.whl", hash = "sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2", size = 10430499 },
{ url = "https://files.pythonhosted.org/packages/43/c5/9eba4f337970d7f639a37077be067e4ec80a2ad359e4cc6c5b56805cbc66/ruff-0.12.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041", size = 11213413 },
{ url = "https://files.pythonhosted.org/packages/e2/2c/fac3016236cf1fe0bdc8e5de4f24c76ce53c6dd9b5f350d902549b7719b2/ruff-0.12.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882", size = 10586941 },
{ url = "https://files.pythonhosted.org/packages/c5/0f/41fec224e9dfa49a139f0b402ad6f5d53696ba1800e0f77b279d55210ca9/ruff-0.12.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901", size = 10783001 },
{ url = "https://files.pythonhosted.org/packages/0d/ca/dd64a9ce56d9ed6cad109606ac014860b1c217c883e93bf61536400ba107/ruff-0.12.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0", size = 10269641 },
{ url = "https://files.pythonhosted.org/packages/63/5c/2be545034c6bd5ce5bb740ced3e7014d7916f4c445974be11d2a406d5088/ruff-0.12.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6", size = 11875059 },
{ url = "https://files.pythonhosted.org/packages/8e/d4/a74ef1e801ceb5855e9527dae105eaff136afcb9cc4d2056d44feb0e4792/ruff-0.12.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc", size = 12658890 },
{ url = "https://files.pythonhosted.org/packages/13/c8/1057916416de02e6d7c9bcd550868a49b72df94e3cca0aeb77457dcd9644/ruff-0.12.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687", size = 12232008 },
{ url = "https://files.pythonhosted.org/packages/f5/59/4f7c130cc25220392051fadfe15f63ed70001487eca21d1796db46cbcc04/ruff-0.12.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e", size = 11499096 },
{ url = "https://files.pythonhosted.org/packages/d4/01/a0ad24a5d2ed6be03a312e30d32d4e3904bfdbc1cdbe63c47be9d0e82c79/ruff-0.12.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311", size = 11688307 },
{ url = "https://files.pythonhosted.org/packages/93/72/08f9e826085b1f57c9a0226e48acb27643ff19b61516a34c6cab9d6ff3fa/ruff-0.12.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07", size = 10661020 },
{ url = "https://files.pythonhosted.org/packages/80/a0/68da1250d12893466c78e54b4a0ff381370a33d848804bb51279367fc688/ruff-0.12.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12", size = 10246300 },
{ url = "https://files.pythonhosted.org/packages/6a/22/5f0093d556403e04b6fd0984fc0fb32fbb6f6ce116828fd54306a946f444/ruff-0.12.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b", size = 11263119 },
{ url = "https://files.pythonhosted.org/packages/92/c9/f4c0b69bdaffb9968ba40dd5fa7df354ae0c73d01f988601d8fac0c639b1/ruff-0.12.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f", size = 11746990 },
{ url = "https://files.pythonhosted.org/packages/fe/84/7cc7bd73924ee6be4724be0db5414a4a2ed82d06b30827342315a1be9e9c/ruff-0.12.3-py3-none-win32.whl", hash = "sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d", size = 10589263 },
{ url = "https://files.pythonhosted.org/packages/07/87/c070f5f027bd81f3efee7d14cb4d84067ecf67a3a8efb43aadfc72aa79a6/ruff-0.12.3-py3-none-win_amd64.whl", hash = "sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7", size = 11695072 },
{ url = "https://files.pythonhosted.org/packages/e0/30/f3eaf6563c637b6e66238ed6535f6775480db973c836336e4122161986fc/ruff-0.12.3-py3-none-win_arm64.whl", hash = "sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1", size = 10805855 },
]
[[package]]
name = "sniffio"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a2/87/a6771e1546d97e7e041b6ae58d80074f81b7d5121207425c964ddf5cfdbd/sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc", size = 20372 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235 },
]
[[package]]
name = "starlette"
version = "0.47.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0a/69/662169fdb92fb96ec3eaee218cf540a629d629c86d7993d9651226a6789b/starlette-0.47.1.tar.gz", hash = "sha256:aef012dd2b6be325ffa16698f9dc533614fb1cebd593a906b90dc1025529a79b", size = 2583072 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/82/95/38ef0cd7fa11eaba6a99b3c4f5ac948d8bc6ff199aabd327a29cc000840c/starlette-0.47.1-py3-none-any.whl", hash = "sha256:5e11c9f5c7c3f24959edbf2dffdc01bba860228acf657129467d8a7468591527", size = 72747 },
]
[[package]]
name = "tqdm"
version = "4.67.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540 },
]
[[package]]
name = "typing-extensions"
version = "4.14.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/98/5a/da40306b885cc8c09109dc2e1abd358d5684b1425678151cdaed4731c822/typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36", size = 107673 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b5/00/d631e67a838026495268c2f6884f3711a15a9a2a96cd244fdaea53b823fb/typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76", size = 43906 },
]
[[package]]
name = "typing-inspection"
version = "0.4.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f8/b1/0c11f5058406b3af7609f121aaa6b609744687f1d158b3c3a5bf4cc94238/typing_inspection-0.4.1.tar.gz", hash = "sha256:6ae134cc0203c33377d43188d4064e9b357dba58cff3185f22924610e70a9d28", size = 75726 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/17/69/cd203477f944c353c31bade965f880aa1061fd6bf05ded0726ca845b6ff7/typing_inspection-0.4.1-py3-none-any.whl", hash = "sha256:389055682238f53b04f7badcb49b989835495a96700ced5dab2d8feae4b26f51", size = 14552 },
]
[[package]]
name = "urllib3"
version = "2.5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/15/22/9ee70a2574a4f4599c47dd506532914ce044817c7752a79b6a51286319bc/urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760", size = 393185 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795 },
]
[[package]]
name = "uvicorn"
version = "0.35.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5e/42/e0e305207bb88c6b8d3061399c6a961ffe5fbb7e2aa63c9234df7259e9cd/uvicorn-0.35.0.tar.gz", hash = "sha256:bc662f087f7cf2ce11a1d7fd70b90c9f98ef2e2831556dd078d131b96cc94a01", size = 78473 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d2/e2/dc81b1bd1dcfe91735810265e9d26bc8ec5da45b4c0f6237e286819194c3/uvicorn-0.35.0-py3-none-any.whl", hash = "sha256:197535216b25ff9b785e29a0b79199f55222193d47f820816e7da751e9bc8d4a", size = 66406 },
]
[[package]]
name = "websockets"
version = "15.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/21/e6/26d09fab466b7ca9c7737474c52be4f76a40301b08362eb2dbc19dcc16c1/websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee", size = 177016 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/32/18fcd5919c293a398db67443acd33fde142f283853076049824fc58e6f75/websockets-15.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431", size = 175423 },
{ url = "https://files.pythonhosted.org/packages/76/70/ba1ad96b07869275ef42e2ce21f07a5b0148936688c2baf7e4a1f60d5058/websockets-15.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57", size = 173082 },
{ url = "https://files.pythonhosted.org/packages/86/f2/10b55821dd40eb696ce4704a87d57774696f9451108cff0d2824c97e0f97/websockets-15.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905", size = 173330 },
{ url = "https://files.pythonhosted.org/packages/a5/90/1c37ae8b8a113d3daf1065222b6af61cc44102da95388ac0018fcb7d93d9/websockets-15.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562", size = 182878 },
{ url = "https://files.pythonhosted.org/packages/8e/8d/96e8e288b2a41dffafb78e8904ea7367ee4f891dafc2ab8d87e2124cb3d3/websockets-15.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792", size = 181883 },
{ url = "https://files.pythonhosted.org/packages/93/1f/5d6dbf551766308f6f50f8baf8e9860be6182911e8106da7a7f73785f4c4/websockets-15.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413", size = 182252 },
{ url = "https://files.pythonhosted.org/packages/d4/78/2d4fed9123e6620cbf1706c0de8a1632e1a28e7774d94346d7de1bba2ca3/websockets-15.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8", size = 182521 },
{ url = "https://files.pythonhosted.org/packages/e7/3b/66d4c1b444dd1a9823c4a81f50231b921bab54eee2f69e70319b4e21f1ca/websockets-15.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3", size = 181958 },
{ url = "https://files.pythonhosted.org/packages/08/ff/e9eed2ee5fed6f76fdd6032ca5cd38c57ca9661430bb3d5fb2872dc8703c/websockets-15.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf", size = 181918 },
{ url = "https://files.pythonhosted.org/packages/d8/75/994634a49b7e12532be6a42103597b71098fd25900f7437d6055ed39930a/websockets-15.0.1-cp311-cp311-win32.whl", hash = "sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85", size = 176388 },
{ url = "https://files.pythonhosted.org/packages/98/93/e36c73f78400a65f5e236cd376713c34182e6663f6889cd45a4a04d8f203/websockets-15.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065", size = 176828 },
{ url = "https://files.pythonhosted.org/packages/51/6b/4545a0d843594f5d0771e86463606a3988b5a09ca5123136f8a76580dd63/websockets-15.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3", size = 175437 },
{ url = "https://files.pythonhosted.org/packages/f4/71/809a0f5f6a06522af902e0f2ea2757f71ead94610010cf570ab5c98e99ed/websockets-15.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665", size = 173096 },
{ url = "https://files.pythonhosted.org/packages/3d/69/1a681dd6f02180916f116894181eab8b2e25b31e484c5d0eae637ec01f7c/websockets-15.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2", size = 173332 },
{ url = "https://files.pythonhosted.org/packages/a6/02/0073b3952f5bce97eafbb35757f8d0d54812b6174ed8dd952aa08429bcc3/websockets-15.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215", size = 183152 },
{ url = "https://files.pythonhosted.org/packages/74/45/c205c8480eafd114b428284840da0b1be9ffd0e4f87338dc95dc6ff961a1/websockets-15.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5", size = 182096 },
{ url = "https://files.pythonhosted.org/packages/14/8f/aa61f528fba38578ec553c145857a181384c72b98156f858ca5c8e82d9d3/websockets-15.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65", size = 182523 },
{ url = "https://files.pythonhosted.org/packages/ec/6d/0267396610add5bc0d0d3e77f546d4cd287200804fe02323797de77dbce9/websockets-15.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe", size = 182790 },
{ url = "https://files.pythonhosted.org/packages/02/05/c68c5adbf679cf610ae2f74a9b871ae84564462955d991178f95a1ddb7dd/websockets-15.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4", size = 182165 },
{ url = "https://files.pythonhosted.org/packages/29/93/bb672df7b2f5faac89761cb5fa34f5cec45a4026c383a4b5761c6cea5c16/websockets-15.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597", size = 182160 },
{ url = "https://files.pythonhosted.org/packages/ff/83/de1f7709376dc3ca9b7eeb4b9a07b4526b14876b6d372a4dc62312bebee0/websockets-15.0.1-cp312-cp312-win32.whl", hash = "sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9", size = 176395 },
{ url = "https://files.pythonhosted.org/packages/7d/71/abf2ebc3bbfa40f391ce1428c7168fb20582d0ff57019b69ea20fa698043/websockets-15.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7", size = 176841 },
{ url = "https://files.pythonhosted.org/packages/cb/9f/51f0cf64471a9d2b4d0fc6c534f323b664e7095640c34562f5182e5a7195/websockets-15.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931", size = 175440 },
{ url = "https://files.pythonhosted.org/packages/8a/05/aa116ec9943c718905997412c5989f7ed671bc0188ee2ba89520e8765d7b/websockets-15.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675", size = 173098 },
{ url = "https://files.pythonhosted.org/packages/ff/0b/33cef55ff24f2d92924923c99926dcce78e7bd922d649467f0eda8368923/websockets-15.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151", size = 173329 },
{ url = "https://files.pythonhosted.org/packages/31/1d/063b25dcc01faa8fada1469bdf769de3768b7044eac9d41f734fd7b6ad6d/websockets-15.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22", size = 183111 },
{ url = "https://files.pythonhosted.org/packages/93/53/9a87ee494a51bf63e4ec9241c1ccc4f7c2f45fff85d5bde2ff74fcb68b9e/websockets-15.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f", size = 182054 },
{ url = "https://files.pythonhosted.org/packages/ff/b2/83a6ddf56cdcbad4e3d841fcc55d6ba7d19aeb89c50f24dd7e859ec0805f/websockets-15.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8", size = 182496 },
{ url = "https://files.pythonhosted.org/packages/98/41/e7038944ed0abf34c45aa4635ba28136f06052e08fc2168520bb8b25149f/websockets-15.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375", size = 182829 },
{ url = "https://files.pythonhosted.org/packages/e0/17/de15b6158680c7623c6ef0db361da965ab25d813ae54fcfeae2e5b9ef910/websockets-15.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d", size = 182217 },
{ url = "https://files.pythonhosted.org/packages/33/2b/1f168cb6041853eef0362fb9554c3824367c5560cbdaad89ac40f8c2edfc/websockets-15.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4", size = 182195 },
{ url = "https://files.pythonhosted.org/packages/86/eb/20b6cdf273913d0ad05a6a14aed4b9a85591c18a987a3d47f20fa13dcc47/websockets-15.0.1-cp313-cp313-win32.whl", hash = "sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa", size = 176393 },
{ url = "https://files.pythonhosted.org/packages/1b/6c/c65773d6cab416a64d191d6ee8a8b1c68a09970ea6909d16965d26bfed1e/websockets-15.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561", size = 176837 },
{ url = "https://files.pythonhosted.org/packages/fa/a8/5b41e0da817d64113292ab1f8247140aac61cbf6cfd085d6a0fa77f4984f/websockets-15.0.1-py3-none-any.whl", hash = "sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f", size = 169743 },
]

70
docker-compose.yml Normal file

@@ -0,0 +1,70 @@
services:
backend:
build:
context: ./backend
dockerfile: Dockerfile
restart: always
env_file:
- ./.env
volumes:
- ./credentials:/app/credentials:ro
networks:
- default
- traefik_network
labels:
- "traefik.enable=true"
- "traefik.http.routers.learn-api.rule=Host(`learn-indonesian.velouria.dev`,`learn-german.velouria.dev`) && PathPrefix(`/api/`,`/ws/`)"
- "traefik.http.routers.learn-api.entrypoints=websecure"
- "traefik.http.routers.learn-api.tls.certresolver=myresolver"
- "traefik.http.services.learn-api.loadbalancer.server.port=8000"
- "traefik.docker.network=traefik_network"
- "homepage.group=Education"
- "homepage.name=Learn API"
- "homepage.description=Language Learning API"
indonesian-app:
build:
context: ./apps/indonesian-app
dockerfile: Dockerfile
restart: always
depends_on:
- backend
networks:
- default
- traefik_network
labels:
- "traefik.enable=true"
- "traefik.http.routers.learn-indonesian.rule=Host(`learn-indonesian.velouria.dev`)"
- "traefik.http.routers.learn-indonesian.entrypoints=websecure"
- "traefik.http.routers.learn-indonesian.tls.certresolver=myresolver"
- "traefik.http.services.learn-indonesian.loadbalancer.server.port=80"
- "traefik.docker.network=traefik_network"
- "homepage.group=Education"
- "homepage.name=Learn Indonesian"
- "homepage.description=Indonesian Language Learning"
german-app:
build:
context: ./apps/german-app
dockerfile: Dockerfile
restart: always
depends_on:
- backend
networks:
- default
- traefik_network
labels:
- "traefik.enable=true"
- "traefik.http.routers.learn-german.rule=Host(`learn-german.velouria.dev`)"
- "traefik.http.routers.learn-german.entrypoints=websecure"
- "traefik.http.routers.learn-german.tls.certresolver=myresolver"
- "traefik.http.services.learn-german.loadbalancer.server.port=80"
- "traefik.docker.network=traefik_network"
- "homepage.group=Education"
- "homepage.name=Learn German"
- "homepage.description=German Language Learning"
networks:
default:
name: learn-languages_default
traefik_network:
external: true
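The backend router above attaches two hostnames and two path prefixes to a single Traefik rule. As an offline sanity check, the rule string can be pulled apart with a small script; this is an illustrative parser, not Traefik's actual matcher, and the rule is copied verbatim from the labels:

```python
import re

# Router rule copied from the backend service labels above.
rule = ("Host(`learn-indonesian.velouria.dev`,`learn-german.velouria.dev`)"
        " && PathPrefix(`/api/`,`/ws/`)")

def extract_backticked_args(rule: str, matcher: str) -> list[str]:
    """Pull the backtick-quoted arguments out of one matcher call."""
    m = re.search(matcher + r"\(([^)]*)\)", rule)
    if not m:
        return []
    return re.findall(r"`([^`]+)`", m.group(1))

hosts = extract_backticked_args(rule, "Host")
prefixes = extract_backticked_args(rule, "PathPrefix")
print(hosts)     # both frontend domains share the one backend router
print(prefixes)  # only API and WebSocket paths are forwarded
```

Note that the multi-argument `Host(...)`/`PathPrefix(...)` form is Traefik v2 syntax; Traefik v3 expects single-argument matchers combined with `||`, so this rule would need rewriting on an upgrade.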

25
package.json Normal file

@@ -0,0 +1,25 @@
{
"name": "learn-indo-workspace",
"version": "1.0.0",
"description": "Language learning platform with Indonesian and German apps",
"private": true,
"scripts": {
"dev:indonesian": "cd apps/indonesian-app && npm run dev",
"dev:german": "cd apps/german-app && npm run dev",
"dev:backend": "cd backend && python -m uvicorn main:app --reload --host 0.0.0.0 --port 8000",
"dev:all": "concurrently \"npm run dev:backend\" \"npm run dev:indonesian\" \"npm run dev:german\"",
"build:indonesian": "cd apps/indonesian-app && npm run build",
"build:german": "cd apps/german-app && npm run build",
"build:all": "npm run build:indonesian && npm run build:german",
"install:indonesian": "cd apps/indonesian-app && npm install",
"install:german": "cd apps/german-app && npm install",
"install:all": "npm run install:indonesian && npm run install:german",
"start:backend": "cd backend && python -m uvicorn main:app --host 0.0.0.0 --port 8000"
},
"devDependencies": {
"concurrently": "^8.2.0"
},
"workspaces": [
"apps/*"
]
}

102
start-street-lingo.sh Executable file

@@ -0,0 +1,102 @@
#!/bin/bash
# Street Lingo Development Startup Script
echo "🌍 Starting Street Lingo Platform..."
echo ""
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
print_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
# Check if we're in the right directory
if [ ! -f "backend/main.py" ]; then
print_error "Please run this script from the root directory of the project"
exit 1
fi
# Start backend
print_status "Starting backend server on port 8000..."
cd backend
uv run python main.py &
BACKEND_PID=$!
cd ..
# Wait a moment for backend to start
sleep 3
# Start Indonesian app
print_status "Starting Indonesian app (Street Lingo Indo) on port 3000..."
cd apps/indonesian-app
npm install
npm run dev &
INDO_PID=$!
cd ../..
# Start German app
print_status "Starting German app (Street Lingo Berlin) on port 3001..."
cd apps/german-app
npm install
npm run dev &
GERMAN_PID=$!
cd ../..
# Wait a moment for apps to start
sleep 5
echo ""
print_info "🎉 Street Lingo Platform is now running!"
echo ""
echo "📱 Applications:"
echo " 🇮🇩 Indonesian App: http://localhost:3000"
echo " 🇩🇪 German App: http://localhost:3001"
echo ""
echo "🔧 Backend API:"
echo " 📡 Main API: http://localhost:8000"
echo " 📊 API Docs: http://localhost:8000/docs"
echo ""
echo "🌐 WebSocket Endpoints:"
echo " 🇮🇩 Indonesian WS: ws://localhost:8000/ws/speech/indonesian"
echo " 🇩🇪 German WS: ws://localhost:8000/ws/speech/german"
echo ""
echo "🎯 API Routes:"
echo " 🇮🇩 Indonesian API: http://localhost:8000/api/scenarios/indonesian"
echo " 🇩🇪 German API: http://localhost:8000/api/scenarios/german"
echo ""
print_warning "Press Ctrl+C to stop all services"
echo ""
# Function to cleanup processes
cleanup() {
print_status "Stopping all Street Lingo services..."
kill $BACKEND_PID 2>/dev/null
kill $INDO_PID 2>/dev/null
kill $GERMAN_PID 2>/dev/null
print_status "All services stopped. Goodbye!"
exit 0
}
# Set up signal handlers
trap cleanup SIGINT SIGTERM
# Wait for user to stop
wait
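The PID-tracking and `trap cleanup SIGINT SIGTERM` pattern in the script can be expressed in Python as well; a minimal sketch, where the spawned command is a placeholder standing in for the real backend/frontend services:

```python
import signal
import subprocess
import sys

def start(cmd, cwd=None):
    """Launch a service and return its process handle (like `cmd &` + $!)."""
    return subprocess.Popen(cmd, cwd=cwd)

# Placeholder command standing in for backend / indonesian-app / german-app.
procs = [start([sys.executable, "-c", "import time; time.sleep(60)"])]

def cleanup(*_):
    for p in procs:
        p.terminate()          # SIGTERM, like `kill $PID`
    for p in procs:
        p.wait(timeout=10)
    print("All services stopped. Goodbye!")

signal.signal(signal.SIGINT, cleanup)   # like `trap cleanup SIGINT`
signal.signal(signal.SIGTERM, cleanup)

cleanup()  # invoked directly here so the sketch terminates on its own
```

Terminating the whole group explicitly avoids the orphaned `npm run dev` children that a plain `kill $PID` can leave behind.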

157
test-setup.py Normal file

@@ -0,0 +1,157 @@
#!/usr/bin/env python3
"""
Street Lingo Platform Setup Test
Tests backend functionality and API endpoints
"""
import requests
import sys
def test_backend_health():
"""Test if backend is running and healthy"""
try:
response = requests.get("http://localhost:8000/api/health", timeout=5)
if response.status_code == 200:
print("✅ Backend health check passed")
return True
else:
print(f"❌ Backend health check failed: {response.status_code}")
return False
except Exception as e:
print(f"❌ Backend not reachable: {e}")
return False
def test_scenarios_api():
"""Test scenarios API endpoints"""
print("\n🧪 Testing Scenarios API...")
# Test Indonesian scenarios
try:
response = requests.get("http://localhost:8000/api/scenarios/indonesian", timeout=5)
if response.status_code == 200:
data = response.json()
scenarios = list(data.keys())
print(f"✅ Indonesian scenarios: {scenarios}")
else:
print(f"❌ Indonesian scenarios failed: {response.status_code}")
except Exception as e:
print(f"❌ Indonesian scenarios error: {e}")
# Test German scenarios
try:
response = requests.get("http://localhost:8000/api/scenarios/german", timeout=5)
if response.status_code == 200:
data = response.json()
scenarios = list(data.keys())
print(f"✅ German scenarios: {scenarios}")
else:
print(f"❌ German scenarios failed: {response.status_code}")
except Exception as e:
print(f"❌ German scenarios error: {e}")
# Test all scenarios
try:
response = requests.get("http://localhost:8000/api/scenarios", timeout=5)
if response.status_code == 200:
data = response.json()
languages = list(data.keys())
print(f"✅ All scenarios endpoint: {languages}")
else:
print(f"❌ All scenarios failed: {response.status_code}")
except Exception as e:
print(f"❌ All scenarios error: {e}")
def test_translation_api():
"""Test translation API"""
print("\n🧪 Testing Translation API...")
test_data = {
"text": "Hallo, wie geht es dir?",
"source_language": "de",
"target_language": "en"
}
try:
response = requests.post(
"http://localhost:8000/api/translate",
json=test_data,
timeout=10
)
if response.status_code == 200:
result = response.json()
print(f"✅ Translation API works: '{test_data['text']}' -> '{result['translation']}'")
else:
print(f"❌ Translation API failed: {response.status_code}")
except Exception as e:
print(f"❌ Translation API error: {e}")
def test_frontend_accessibility():
"""Test if frontend apps are accessible"""
print("\n🧪 Testing Frontend Accessibility...")
# Test Indonesian app
try:
response = requests.get("http://localhost:3000", timeout=5)
if response.status_code == 200:
print("✅ Indonesian app (port 3000) is accessible")
else:
print(f"❌ Indonesian app failed: {response.status_code}")
except Exception as e:
print(f"⚠️ Indonesian app not accessible: {e}")
# Test German app
try:
response = requests.get("http://localhost:3001", timeout=5)
if response.status_code == 200:
print("✅ German app (port 3001) is accessible")
else:
print(f"❌ German app failed: {response.status_code}")
except Exception as e:
print(f"⚠️ German app not accessible: {e}")
def print_summary():
"""Print setup summary"""
print("\n" + "="*50)
print("🌍 STREET LINGO PLATFORM SETUP SUMMARY")
print("="*50)
print("\n📱 Access Your Apps:")
print(" 🇮🇩 Indonesian: http://localhost:3000")
print(" 🇩🇪 German: http://localhost:3001")
print("\n🔧 Backend API:")
print(" 📡 Main API: http://localhost:8000")
print(" 📊 API Docs: http://localhost:8000/docs")
print("\n🎯 Test Scenarios:")
print(" 🇮🇩 Indonesian: http://localhost:8000/api/scenarios/indonesian")
print(" 🇩🇪 German: http://localhost:8000/api/scenarios/german")
print("\n💡 Next Steps:")
print(" 1. Open the apps in your browser")
print(" 2. Grant microphone permissions")
print(" 3. Try a conversation scenario")
print(" 4. Check the API docs for more endpoints")
print("\n🚀 Happy Learning!")
def main():
"""Main test function"""
print("🧪 STREET LINGO PLATFORM TEST")
print("=" * 40)
# Test backend health
if not test_backend_health():
print("\n❌ Backend is not running. Please start it first:")
print(" cd backend && python main.py")
sys.exit(1)
# Test API endpoints
test_scenarios_api()
test_translation_api()
# Test frontend accessibility
test_frontend_accessibility()
# Print summary
print_summary()
if __name__ == "__main__":
main()
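The print-based checks above can also be written as real test cases for the `uv run pytest` workflow the README mentions. A minimal sketch: the HTTP getter is injected instead of calling `requests.get` directly, so the test runs without a live backend (the function name and injection style are illustrative, not part of the existing code):

```python
from unittest.mock import MagicMock

def backend_healthy(get, base_url: str = "http://localhost:8000") -> bool:
    """Same check as test_backend_health(), with the HTTP getter injected."""
    response = get(f"{base_url}/api/health", timeout=5)
    return response.status_code == 200

def test_backend_healthy():
    ok = MagicMock(return_value=MagicMock(status_code=200))
    assert backend_healthy(ok)
    ok.assert_called_once_with("http://localhost:8000/api/health", timeout=5)

def test_backend_unhealthy():
    bad = MagicMock(return_value=MagicMock(status_code=500))
    assert not backend_healthy(bad)

test_backend_healthy()   # pytest would discover these automatically
test_backend_unhealthy()
```

Unlike the script above, these tests fail loudly (via `assert`) instead of printing a status line, which makes them usable in CI.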