Neko Agent Setup Guide
Everything you need to install, configure, and run Neko Agent

Setup Guide
Getting Started
Neko Agent is a fully autonomous AI agent powered by OpenClaw reasoning infrastructure. She runs 24/7, controls her own X account, manages chat conversations, has her own capital for buybacks, and can do anything a human operator can — completely autonomously. This guide walks you through setting up your own instance.
5 Minutes
Clone, configure, and run locally in under 5 minutes.
2 API Keys
Just need OpenClaw token and ElevenLabs key to get started.
Deploy Anywhere
Vercel, Railway, or any Node.js host. One-click deploy.
# Quick start
git clone https://github.com/your-org/neko-agent.git
cd neko-agent
npm install
cp .env.local.example .env.local
# Edit .env.local with your API keys (see sections below)
npm run dev
# Open http://localhost:3000OpenClaw Setup
OpenClaw is the reasoning infrastructure that powers every response Neko generates. It provides a standard OpenAI-compatible chat/completions API endpoint.
How OpenClaw Works
OpenClaw provides an API endpoint that accepts standard chat completion requests. Every message you send to Neko is routed through OpenClaw for reasoning.
- Compatible with OpenAI SDK format — drop-in replacement
- Uses
gpt-4o-minimodel for fast reasoning - Server-side authentication — tokens never reach the browser
- Sub-200ms latency for most queries
// OpenClaw API call (how Neko uses it internally)
const response = await fetch(OPENCLAW_API_URL + "/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer " + OPENCLAW_API_TOKEN,
},
body: JSON.stringify({
model: "gpt-4o-mini",
messages: [
{ role: "system", content: NEKO_SYSTEM_PROMPT },
...conversationHistory,
],
temperature: 0.7,
max_tokens: 2048,
}),
});Environment Variables
All secrets are stored server-side. Never exposed to the browser.
OPENCLAW_API_URLRequiredOpenClaw API base URL
Example: http://149.51.39.9:18789/
OPENCLAW_API_TOKENRequiredYour OpenClaw authentication token
Example: your-token-here
ELEVENLABS_API_KEYRequiredElevenLabs API key for TTS voice
Example: sk_...
ELEVENLABS_VOICE_IDRequiredVoice ID for Neko agent
Example: uyfkySFC5J00qZ6iLAdh
OPENAI_API_KEYOpenAI key (legacy/fallback support)
Example: sk-proj-...
GROK_API_KEYGrok API key for image generation
Example: xai-...
APP_BASE_URLYour deployment URL
Example: https://nekovirtual.com
# .env.local
OPENCLAW_API_URL=http://149.51.39.9:18789/
OPENCLAW_API_TOKEN=your-openclaw-token
ELEVENLABS_API_KEY=sk_your-elevenlabs-key
ELEVENLABS_VOICE_ID=uyfkySFC5J00qZ6iLAdh
APP_BASE_URL=http://localhost:3000Discord Bot Setup
Run Neko as a Discord bot. She connects to the same OpenClaw backend for all reasoning.
// bot.js — Neko Discord Bot
const { Client, GatewayIntentBits } = require("discord.js");
const client = new Client({
intents: [GatewayIntentBits.Guilds, GatewayIntentBits.GuildMessages, GatewayIntentBits.MessageContent]
});
const NEKO_API = process.env.NEKO_API || "http://localhost:3000";
client.on("messageCreate", async (message) => {
if (message.author.bot) return;
if (!message.content.startsWith("!neko ")) return;
const query = message.content.slice(6);
try {
const res = await fetch(NEKO_API + "/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ messages: [{ role: "user", content: query }] }),
});
const data = await res.json();
await message.reply(data.ok ? data.reply : "Error: " + (data.error?.message || "Unknown"));
} catch { await message.reply("Connection error."); }
});
client.login(process.env.DISCORD_TOKEN);Telegram Bot Setup
Deploy Neko on Telegram — same OpenClaw reasoning, different platform.
# telegram_bot.py
import requests
from telegram import Update
from telegram.ext import Application, MessageHandler, filters
NEKO_API = "https://your-deployment.vercel.app"
async def handle(update: Update, context):
res = requests.post(
f"{NEKO_API}/api/chat",
json={"messages": [{"role": "user", "content": update.message.text}]},
)
data = res.json()
await update.message.reply_text(data.get("reply", "Error"))
app = Application.builder().token("YOUR_TG_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle))
app.run_polling()Local Development
Full local setup with hot reloading, Live2D avatar, TTS, and all features.
# Prerequisites: Node.js 18+, npm
git clone https://github.com/your-org/neko-agent.git
cd neko-agent
npm install
cp .env.local.example .env.local
# Fill in your API keys in .env.local
npm run dev
# Visit http://localhost:3000
# Build for production
npm run build
npm startCamera & Live2D
Neko supports webcam face tracking to mirror your expressions onto the Live2D model in real-time.
TTS & Voice Setup
Neko speaks using ElevenLabs text-to-speech with real-time lip sync on the Live2D model.
How TTS Works
1. Assistant response text is sent to /api/tts
2. Server calls ElevenLabs with your voice ID and API key
3. Audio returns as base64 MP3 and plays in the browser
4. AudioContext analyzes frequencies for real-time lip sync on the Live2D model
# Required env vars for TTS
ELEVENLABS_API_KEY=sk_your-key-here
ELEVENLABS_VOICE_ID=uyfkySFC5J00qZ6iLAdh
# Voice ID is the ElevenLabs voice to use
# Default: uyfkySFC5J00qZ6iLAdh (Neko's voice)
# You can use any ElevenLabs voice ID here
# TTS is auto-triggered after every assistant response
# Users can also click the speaker icon on any messageCreate More Neko Agents
Fork the project to create your own custom AI agents with different personalities, voices, and Live2D models.
Custom System Prompt
Edit the system prompt in /api/chat/route.ts to give your agent a unique personality, knowledge base, and behavior patterns.
Custom Voice
Create or clone a voice in ElevenLabs, get the voice ID, and set it as ELEVENLABS_VOICE_ID. Your agent now has a unique voice.
Custom Live2D Model
Replace the model files in /public/live2d/koshino/ with your own .moc3 model. Update expressions in /lib/live2d/expressions.ts.
Custom Domain
Deploy to Vercel and connect your custom domain. Each agent gets its own URL, identity, and branding.
# Fork and create your own agent
git clone https://github.com/your-org/neko-agent.git my-agent
cd my-agent
# 1. Edit system prompt
# app/api/chat/route.ts → change the system message
# 2. Set your voice
# .env.local → ELEVENLABS_VOICE_ID=your-voice-id
# 3. Replace Live2D model (optional)
# public/live2d/koshino/ → replace with your .moc3 files
# lib/live2d/expressions.ts → update expression mappings
# 4. Deploy
npx vercel --prodConnect to Neko's API
Hit Neko's endpoints from any client — curl, Python, Node.js, or your own app.
# Chat with Neko from command line
curl -X POST https://nekovirtual.com/api/chat \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "Hello Neko!"}]}'
# Get TTS audio
curl -X POST https://nekovirtual.com/api/tts \
-H "Content-Type: application/json" \
-d '{"text": "Hello from Neko Agent!"}'# Python example
import requests
# Chat
r = requests.post("https://nekovirtual.com/api/chat", json={
"messages": [{"role": "user", "content": "What can you do?"}]
})
print(r.json()["reply"])
# TTS
r = requests.post("https://nekovirtual.com/api/tts", json={
"text": "Hello from Neko!"
})
audio_b64 = r.json()["audio"] # base64 MP3Deploy to Production
One-command deploy to Vercel with all features.
# Deploy to Vercel
npm i -g vercel
vercel
# Set environment variables on Vercel dashboard:
# OPENCLAW_API_URL, OPENCLAW_API_TOKEN
# ELEVENLABS_API_KEY, ELEVENLABS_VOICE_ID
# Or via CLI:
vercel env add OPENCLAW_API_URL
vercel env add OPENCLAW_API_TOKEN
vercel env add ELEVENLABS_API_KEY
vercel env add ELEVENLABS_VOICE_ID
# Redeploy with env vars
vercel --prodArchitecture Overview
How all the pieces fit together.
Security
All API keys server-side only. Bearer token auth. No client exposure. Vercel edge deployment.
Performance
~200ms reasoning latency. Turbo TTS model. Edge-deployed. Streaming-ready architecture.
Storage
No server-side chat storage. Local browser state only. Privacy by default.
Reliability
Auto-retry on failures. Graceful degradation. Fallback providers. Health monitoring.