Neko Agent Setup Guide

Everything you need to install, configure, and run Neko Agent


Getting Started

Neko Agent is a fully autonomous AI agent powered by OpenClaw reasoning infrastructure. She runs 24/7, controls her own X account, manages chat conversations, has her own capital for buybacks, and can do anything a human operator can — completely autonomously. This guide walks you through setting up your own instance.

5 Minutes

Clone, configure, and run locally in under 5 minutes.

2 API Keys

You only need an OpenClaw token and an ElevenLabs key to get started.

Deploy Anywhere

Vercel, Railway, or any Node.js host. One-click deploy.

bash
# Quick start
git clone https://github.com/your-org/neko-agent.git
cd neko-agent
npm install
cp .env.local.example .env.local
# Edit .env.local with your API keys (see sections below)
npm run dev
# Open http://localhost:3000

OpenClaw Setup

OpenClaw is the reasoning infrastructure that powers every response Neko generates. It provides a standard OpenAI-compatible chat/completions API endpoint.

How OpenClaw Works

OpenClaw provides an API endpoint that accepts standard chat completion requests. Every message you send to Neko is routed through OpenClaw for reasoning.

  • Compatible with OpenAI SDK format — drop-in replacement
  • Uses gpt-4o-mini model for fast reasoning
  • Server-side authentication — tokens never reach the browser
  • Sub-200ms latency for most queries
typescript
// OpenClaw API call (how Neko uses it internally)
const response = await fetch(OPENCLAW_API_URL + "/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + OPENCLAW_API_TOKEN,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: NEKO_SYSTEM_PROMPT },
      ...conversationHistory,
    ],
    temperature: 0.7,
    max_tokens: 2048,
  }),
});
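
The response follows the OpenAI chat-completions shape, so the reply text lives at `choices[0].message.content`. A small helper for pulling it out safely could look like this (the `extractReply` name is illustrative, not from the Neko codebase):

```typescript
// Extract the assistant reply from an OpenAI-compatible chat/completions
// response body. Returns null if the shape is not what we expect.
// Illustrative sketch; the Neko codebase may parse the response differently.
interface ChatCompletionResponse {
  choices?: Array<{ message?: { content?: string } }>;
}

function extractReply(body: ChatCompletionResponse): string | null {
  return body.choices?.[0]?.message?.content ?? null;
}
```

Usage after the fetch above: `const reply = extractReply(await response.json());` — null means the response was malformed and should be handled as an error.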

Environment Variables

All secrets are stored server-side. Never exposed to the browser.

OPENCLAW_API_URL (Required)

OpenClaw API base URL

Example: http://149.51.39.9:18789

OPENCLAW_API_TOKEN (Required)

Your OpenClaw authentication token

Example: your-token-here

ELEVENLABS_API_KEY (Required)

ElevenLabs API key for TTS voice

Example: sk_...

ELEVENLABS_VOICE_ID (Required)

Voice ID for Neko agent

Example: uyfkySFC5J00qZ6iLAdh

OPENAI_API_KEY (Optional)

OpenAI key (legacy/fallback support)

Example: sk-proj-...

GROK_API_KEY (Optional)

Grok API key for image generation

Example: xai-...

APP_BASE_URL (Optional)

Your deployment URL

Example: https://nekovirtual.com

bash
# .env.local
OPENCLAW_API_URL=http://149.51.39.9:18789
OPENCLAW_API_TOKEN=your-openclaw-token
ELEVENLABS_API_KEY=sk_your-elevenlabs-key
ELEVENLABS_VOICE_ID=uyfkySFC5J00qZ6iLAdh
APP_BASE_URL=http://localhost:3000
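
Because the server cannot reach OpenClaw or ElevenLabs without the four required variables, it is worth failing fast at startup rather than on the first request. A minimal sketch (`assertEnv` is a hypothetical helper, not part of the project):

```typescript
// Throw at startup if any required environment variable is missing.
// Hypothetical helper; not part of the Neko codebase.
function assertEnv(names: string[], env: Record<string, string | undefined>): void {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Call once on server startup:
// assertEnv(["OPENCLAW_API_URL", "OPENCLAW_API_TOKEN",
//            "ELEVENLABS_API_KEY", "ELEVENLABS_VOICE_ID"], process.env);
```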

Discord Bot Setup

Run Neko as a Discord bot. She connects to the same OpenClaw backend for all reasoning.

1
Create a Discord Application
Go to discord.com/developers, create a new app, enable Message Content Intent, and copy the bot token.
2
Set up the bot project
Create a new Node.js project and install discord.js.
3
Configure environment
Set DISCORD_TOKEN and NEKO_API (your deployment URL or localhost).
4
Run the bot
Start with node bot.js — Neko is now live in your Discord server.
javascript
// bot.js — Neko Discord Bot
const { Client, GatewayIntentBits } = require("discord.js");
const client = new Client({ 
  intents: [GatewayIntentBits.Guilds, GatewayIntentBits.GuildMessages, GatewayIntentBits.MessageContent] 
});

const NEKO_API = process.env.NEKO_API || "http://localhost:3000";

client.on("messageCreate", async (message) => {
  if (message.author.bot) return;
  if (!message.content.startsWith("!neko ")) return;
  const query = message.content.slice(6);
  try {
    const res = await fetch(NEKO_API + "/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: [{ role: "user", content: query }] }),
    });
    const data = await res.json();
    await message.reply(data.ok ? data.reply : "Error: " + (data.error?.message || "Unknown"));
  } catch { await message.reply("Connection error."); }
});

client.login(process.env.DISCORD_TOKEN);

Telegram Bot Setup

Deploy Neko on Telegram — same OpenClaw reasoning, different platform.

python
# telegram_bot.py
import requests
from telegram import Update
from telegram.ext import Application, MessageHandler, filters

NEKO_API = "https://your-deployment.vercel.app"

async def handle(update: Update, context):
    try:
        res = requests.post(
            f"{NEKO_API}/api/chat",
            json={"messages": [{"role": "user", "content": update.message.text}]},
            timeout=30,
        )
        res.raise_for_status()
        await update.message.reply_text(res.json().get("reply", "Error"))
    except requests.RequestException:
        await update.message.reply_text("Connection error.")

app = Application.builder().token("YOUR_TG_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle))
app.run_polling()

Local Development

Full local setup with hot reloading, Live2D avatar, TTS, and all features.

bash
# Prerequisites: Node.js 18+, npm
git clone https://github.com/your-org/neko-agent.git
cd neko-agent
npm install
cp .env.local.example .env.local
# Fill in your API keys in .env.local
npm run dev
# Visit http://localhost:3000

# Build for production
npm run build
npm start

  • Full Live2D avatar with expressions
  • ElevenLabs TTS with lip sync
  • Camera-based face tracking
  • OpenClaw reasoning (same as prod)
  • Companion builder & presets
  • Hot module reloading
  • All API endpoints
  • No rate limits locally

Camera & Live2D

Neko supports webcam face tracking to mirror your expressions onto the Live2D model in real-time.

1
Grant camera permissions
Your browser will prompt for webcam access. Tracking uses MediaPipe Face Mesh; all processing happens client-side.
2
Position yourself
Sit facing your webcam. The model tracks head tilt, eye blinks, mouth movement, and eyebrow position.
3
Fine-tune in Companion Builder
Visit /companion and use the Fine-Tune tab to adjust tracking sensitivity and save presets.
4
Works locally and in production
Camera tracking works on localhost:3000 and on your deployed Vercel instance. HTTPS required in production.
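
Raw landmark data is noisy, so trackers typically smooth each parameter (head tilt, mouth open, and so on) before driving the model. A hedged sketch of exponential smoothing, where the "sensitivity" knob in the Fine-Tune tab could plausibly map to the alpha value (function name is illustrative, not from the Neko codebase):

```typescript
// Exponential moving average for a tracking parameter.
// alpha in (0, 1]: higher = more responsive, lower = smoother.
// Illustrative only; the actual Fine-Tune implementation may differ.
function smooth(previous: number, raw: number, alpha: number): number {
  return previous + alpha * (raw - previous);
}
```

Called once per video frame, this pulls the displayed value toward the raw measurement without jittering on single-frame noise.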

TTS & Voice Setup

Neko speaks using ElevenLabs text-to-speech with real-time lip sync on the Live2D model.

How TTS Works

1. Assistant response text is sent to /api/tts

2. Server calls ElevenLabs with your voice ID and API key

3. Audio returns as base64 MP3 and plays in the browser

4. AudioContext analyzes frequencies for real-time lip sync on the Live2D model

bash
# Required env vars for TTS
ELEVENLABS_API_KEY=sk_your-key-here
ELEVENLABS_VOICE_ID=uyfkySFC5J00qZ6iLAdh

# Voice ID is the ElevenLabs voice to use
# Default: uyfkySFC5J00qZ6iLAdh (Neko's voice)
# You can use any ElevenLabs voice ID here

# TTS is auto-triggered after every assistant response
# Users can also click the speaker icon on any message
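
On the lip-sync side (step 4 above), the browser's AnalyserNode exposes frequency magnitudes as a Uint8Array of 0–255 values; averaging and normalizing those gives a simple mouth-open value in the 0..1 range. A hedged sketch of that mapping (Neko's actual lip-sync code may weight bins differently):

```typescript
// Map AnalyserNode frequency data (0-255 per bin) to a 0..1 mouth-open value.
// Illustrative; the real lip-sync mapping may use different weighting.
function mouthOpenness(frequencyBins: Uint8Array): number {
  if (frequencyBins.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < frequencyBins.length; i++) {
    sum += frequencyBins[i];
  }
  return sum / frequencyBins.length / 255;
}
```

In the browser this would be fed from `analyser.getByteFrequencyData(bins)` each animation frame, with the result written to the Live2D mouth parameter.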

Create More Neko Agents

Fork the project to create your own custom AI agents with different personalities, voices, and Live2D models.

Custom System Prompt

Edit the system prompt in app/api/chat/route.ts to give your agent a unique personality, knowledge base, and behavior patterns.

Custom Voice

Create or clone a voice in ElevenLabs, get the voice ID, and set it as ELEVENLABS_VOICE_ID. Your agent now has a unique voice.

Custom Live2D Model

Replace the model files in /public/live2d/koshino/ with your own .moc3 model. Update expressions in /lib/live2d/expressions.ts.

Custom Domain

Deploy to Vercel and connect your custom domain. Each agent gets its own URL, identity, and branding.

bash
# Fork and create your own agent
git clone https://github.com/your-org/neko-agent.git my-agent
cd my-agent

# 1. Edit system prompt
#    app/api/chat/route.ts → change the system message

# 2. Set your voice
#    .env.local → ELEVENLABS_VOICE_ID=your-voice-id

# 3. Replace Live2D model (optional)
#    public/live2d/koshino/ → replace with your .moc3 files
#    lib/live2d/expressions.ts → update expression mappings

# 4. Deploy
npx vercel --prod

Connect to Neko's API

Hit Neko's endpoints from any client — curl, Python, Node.js, or your own app.

bash
# Chat with Neko from command line
curl -X POST https://nekovirtual.com/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello Neko!"}]}'

# Get TTS audio
curl -X POST https://nekovirtual.com/api/tts \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello from Neko Agent!"}'
python
# Python example
import requests

# Chat
r = requests.post("https://nekovirtual.com/api/chat", json={
    "messages": [{"role": "user", "content": "What can you do?"}]
})
print(r.json()["reply"])

# TTS
r = requests.post("https://nekovirtual.com/api/tts", json={
    "text": "Hello from Neko!"
})
audio_b64 = r.json()["audio"]  # base64 MP3
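
The same endpoints can be hit from Node.js or TypeScript. A minimal sketch, assuming the response shapes shown above ({ reply } for chat; function names are illustrative):

```typescript
// Build the chat request payload; sending it is a plain fetch POST.
// Illustrative client sketch, not part of the Neko codebase.
function buildChatPayload(text: string): { messages: Array<{ role: string; content: string }> } {
  return { messages: [{ role: "user", content: text }] };
}

async function chatWithNeko(baseUrl: string, text: string): Promise<string> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatPayload(text)),
  });
  const data = await res.json();
  return data.reply; // assumes the { reply } shape shown in the Python example
}
```

Usage: `const reply = await chatWithNeko("https://nekovirtual.com", "Hello Neko!");` (requires Node 18+ for the global fetch).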

Deploy to Production

One-command deploy to Vercel with all features.

bash
# Deploy to Vercel
npm i -g vercel
vercel

# Set environment variables on Vercel dashboard:
# OPENCLAW_API_URL, OPENCLAW_API_TOKEN
# ELEVENLABS_API_KEY, ELEVENLABS_VOICE_ID

# Or via CLI:
vercel env add OPENCLAW_API_URL
vercel env add OPENCLAW_API_TOKEN
vercel env add ELEVENLABS_API_KEY
vercel env add ELEVENLABS_VOICE_ID

# Redeploy with env vars
vercel --prod

Architecture Overview

How all the pieces fit together.

System Architecture
  • Frontend: Next.js + Live2D
  • API Layer: Next.js Routes
  • OpenClaw: Reasoning Engine
  • ElevenLabs: TTS Voice

Security

All API keys server-side only. Bearer token auth. No client exposure. Vercel edge deployment.

Performance

~200ms reasoning latency. Turbo TTS model. Edge-deployed. Streaming-ready architecture.

Storage

No server-side chat storage. Local browser state only. Privacy by default.

Reliability

Auto-retry on failures. Graceful degradation. Fallback providers. Health monitoring.