
neo - Minnova AI Assistant

Status: Planning Language: Go Repo: github.com/minnova-io/neo (to be created)

A conversational CLI assistant (Matrix-themed, alongside oracle) that helps manage client work by reading context from the knowledge-base and client codebases.


MVP

Simplest useful version (~400 lines of Go):

  • Chat REPL
  • Claude API (Ollama later)
  • Reads knowledge-base for client context
  • Reads client codebase (grep, read files)
  • Writes to knowledge-base (journal entries)

What neo Reads

Two sources:

  1. Knowledge-base (on client select): README.md, journal entries
  2. Client codebase (on-demand): LLM decides when to grep/read files

What neo Writes

flowchart LR
    NEO[neo] --> JOURNAL[docs/clients/*/journal/2026-01-07.md]

Appends entries to journal (bugs reported, work done, notes). All markdown, git-tracked.
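The entry format itself can stay plain markdown. A hypothetical appended entry (the heading convention is an assumption, not a decided format):

```markdown
## 14:32 - bug report

Client reported boleto PDF not generating.
Likely cause found in src/billing/boleto/pdf.service.ts:45.
Drafted reply; awaiting confirmation.
```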

Example Session

$ neo
Clients: featbank, kraken

> working on featbank

Loaded featbank context.

> client reported boleto PDF not generating

Searching codebase...
Found in src/billing/boleto/pdf.service.ts:45

Logged to journal/2026-01-07.md

Draft response:
"Oi! Vou investigar o problema com o PDF do boleto."

> exit

Problem

As a 2-person consultancy with multiple clients across multiple channels (Teams, Jira, Azure DevOps):

  • Context switching is expensive
  • Client communications are scattered
  • No central place to track what needs to be done
  • Manual work to log time, create tasks, draft responses

Solution

A CLI tool that:

  1. Reads knowledge-base to understand client context
  2. Reads client codebases to help investigate issues
  3. Writes to knowledge-base to log work and track tasks
  4. Chats naturally - LLM decides what tools to use

Architecture

flowchart TB
    subgraph Input
        CLI[neo CLI]
    end

    subgraph Core
        REPL[Chat REPL]
        LOOP[Tool Calling Loop]
    end

    subgraph LLM["LLM (Claude now, Ollama later)"]
        CLAUDE[Claude API]
    end

    subgraph Tools["Tools (LLM calls these)"]
        KB_READ[kb_read]
        KB_LIST[kb_list]
        KB_APPEND[kb_append]
        CODE_GREP[code_grep]
        CODE_READ[code_read]
    end

    subgraph Storage
        KB[(Knowledge Base)]
        CODE[(Client Codebase)]
    end

    CLI --> REPL
    REPL <--> LOOP
    LOOP <--> CLAUDE
    CLAUDE -->|tool calls| Tools
    Tools --> KB
    Tools --> CODE

How Tool Calling Works

Key insight: you don't implement the logic for deciding which file to search. The LLM decides; neo just executes.

sequenceDiagram
    participant User
    participant neo
    participant Claude
    participant Tools

    User->>neo: "find the boleto bug"
    neo->>Claude: [user message + tool definitions]
    Note over Claude: Decides to search code
    Claude->>neo: tool_call: code_grep("boleto", "src/")
    neo->>Tools: exec: rg "boleto" src/
    Tools-->>neo: results
    neo->>Claude: [tool result]
    Note over Claude: Decides to read file
    Claude->>neo: tool_call: code_read("src/.../pdf.service.ts")
    neo->>Tools: exec: cat src/.../pdf.service.ts
    Tools-->>neo: file contents
    neo->>Claude: [tool result]
    Note over Claude: Has enough info
    Claude-->>neo: "Bug is likely in line 45..."
    neo-->>User: Shows response

Your code: execute tools, return results (~50 lines per tool).
LLM's job: decide strategy, analyze results, respond.
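The loop in the diagram can be sketched as below. The `Reply`/`ToolCall` shapes and the `send`/`execTool` callbacks are illustrative stand-ins for the real Anthropic API types and the tools dispatcher, not the SDK's actual signatures:

```go
package main

// ToolCall is the LLM's request to run one tool.
type ToolCall struct {
	Name string
	Args map[string]string
}

// Reply is one LLM turn: either final text or a batch of tool calls.
type Reply struct {
	Text      string
	ToolCalls []ToolCall
}

// runToolLoop keeps feeding tool results back to the LLM until it
// answers with plain text. send stands in for the Claude API call;
// execTool stands in for the Tools dispatcher.
func runToolLoop(userMsg string,
	send func(history []string) Reply,
	execTool func(ToolCall) string) string {

	history := []string{"user: " + userMsg}
	for {
		reply := send(history)
		if len(reply.ToolCalls) == 0 {
			return reply.Text // the LLM has enough info
		}
		for _, tc := range reply.ToolCalls {
			result := execTool(tc)
			history = append(history, "tool("+tc.Name+"): "+result)
		}
	}
}
```

A production version would also cap the number of iterations so a confused model cannot loop forever.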


MVP Tools

Tool       Type   Description
kb_read    Read   Read a file from the knowledge-base
kb_list    Read   List files in a KB directory
kb_append  Write  Append content to a KB file
code_grep  Read   Search the client codebase (ripgrep)
code_read  Read   Read a file from the client codebase
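For the Claude Messages API, each tool is declared with a name, a description, and a JSON Schema for its input. A sketch of what the `code_grep` declaration might look like (the descriptions and parameter names are assumptions):

```json
{
  "name": "code_grep",
  "description": "Search the selected client's codebase with ripgrep",
  "input_schema": {
    "type": "object",
    "properties": {
      "pattern": { "type": "string", "description": "Regex to search for" },
      "path":    { "type": "string", "description": "Directory relative to the codebase root" }
    },
    "required": ["pattern"]
  }
}
```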

Implementation (simple)

import (
    "os"
    "os/exec"
    "path/filepath"
)

type Tools struct {
    kbPath       string // /home/.../knowledge-base
    codebasePath string // /home/.../projects/featbank
}

func (t *Tools) KBRead(path string) (string, error) {
    b, err := os.ReadFile(filepath.Join(t.kbPath, path))
    return string(b), err
}

func (t *Tools) KBList(path string) ([]string, error) {
    return filepath.Glob(filepath.Join(t.kbPath, path, "*"))
}

func (t *Tools) KBAppend(path, content string) error {
    f, err := os.OpenFile(filepath.Join(t.kbPath, path), os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = f.WriteString(content)
    return err
}

func (t *Tools) CodeGrep(pattern, path string) (string, error) {
    cmd := exec.Command("rg", pattern, filepath.Join(t.codebasePath, path))
    output, err := cmd.Output()
    if err != nil {
        // rg exits 1 when there are no matches; treat that as empty output
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            return "", nil
        }
        return "", err
    }
    return string(output), nil
}

func (t *Tools) CodeRead(path string) (string, error) {
    b, err := os.ReadFile(filepath.Join(t.codebasePath, path))
    return string(b), err
}
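A thin registry can then route tool-call names from the LLM onto implementations. In the sketch below the implementation is a stub; in neo each entry would close over a `Tools` value (`KBRead`, `CodeGrep`, and so on). The registry shape is an assumption, not a decided design:

```go
package main

import "fmt"

// toolFunc executes one tool given its string arguments.
type toolFunc func(args map[string]string) (string, error)

// NewRegistry wires tool names to implementations. The stub here echoes
// its argument; real entries would call into the Tools methods.
func NewRegistry() map[string]toolFunc {
	return map[string]toolFunc{
		"kb_read": func(a map[string]string) (string, error) {
			return "stub: " + a["path"], nil
		},
	}
}

// Dispatch looks up and runs a tool; unknown names come back as errors,
// which can be returned to the LLM as the tool result.
func Dispatch(reg map[string]toolFunc, name string, args map[string]string) (string, error) {
	fn, ok := reg[name]
	if !ok {
		return "", fmt.Errorf("unknown tool: %s", name)
	}
	return fn(args)
}
```

Returning the "unknown tool" error to the model, rather than crashing, lets it recover and try a different tool.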

Configuration

# ~/.neo/config.yaml

# Knowledge base location
knowledge_base: /home/phcurado/Documents/minnova/knowledge-base

# LLM (Claude for MVP)
llm:
  provider: claude
  model: claude-sonnet-4-20250514
  # api_key from ANTHROPIC_API_KEY env var

# Clients
clients:
  featbank:
    kb_path: docs/clients/featbank
    codebase: /home/phcurado/projects/featbank
  kraken:
    kb_path: docs/clients/kraken
    codebase: /home/phcurado/projects/kraken
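The config above could map onto structs like these (the yaml tags assume a loader such as gopkg.in/yaml.v3 in config.go; the `Client` lookup helper is illustrative):

```go
package main

// Config mirrors ~/.neo/config.yaml.
type Config struct {
	KnowledgeBase string                  `yaml:"knowledge_base"`
	LLM           LLMConfig               `yaml:"llm"`
	Clients       map[string]ClientConfig `yaml:"clients"`
}

type LLMConfig struct {
	Provider string `yaml:"provider"`
	Model    string `yaml:"model"`
}

type ClientConfig struct {
	KBPath   string `yaml:"kb_path"`   // path under the knowledge-base root
	Codebase string `yaml:"codebase"`  // absolute path to the client repo
}

// Client returns the named client's config, if it exists.
func (c *Config) Client(name string) (ClientConfig, bool) {
	cc, ok := c.Clients[name]
	return cc, ok
}
```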

File Structure (MVP)

neo/
├── cmd/
│   └── neo/
│       └── main.go           # Entry, config loading
├── internal/
│   ├── repl/
│   │   └── repl.go           # Chat loop
│   ├── llm/
│   │   └── claude.go         # Claude API + tool loop
│   ├── tools/
│   │   └── tools.go          # KB and code tools
│   └── config/
│       └── config.go         # YAML config
├── go.mod
└── README.md

Knowledge-Base Structure

neo uses your existing structure:

knowledge-base/docs/clients/
├── featbank/
│   ├── README.md             # Client overview
│   └── journal/              # Daily entries (neo appends here)
│       └── 2025-01-05.md
└── kraken/
    ├── README.md
    └── journal/

Roadmap

v0.1 - MVP (current focus)

  • REPL chat loop
  • Claude API with tool calling
  • 5 tools: kb_read, kb_list, kb_append, code_grep, code_read
  • YAML config loading
  • Client context loading on select

v0.2 - Usability

  • Conversation history (in-memory)
  • Multi-client switching mid-session
  • Draft responses in user's style
  • Daily briefing command

v0.3 - Privacy (Ollama)

  • Ollama provider
  • Route code analysis to local LLM
  • Keep Claude for KB and drafting

v1.0 - Integrations

  • Kimai time tracking
  • Git operations (commit, PR)
  • n8n webhooks
  • Jira/Linear backends

Future: Privacy with Ollama

For MVP, Claude API is fine (your KB is not sensitive).

Later, add Ollama for client code:

Task             Provider  Why
KB read/write    Claude    Your docs, OK to send
Code analysis    Ollama    Code never leaves the machine
Draft responses  Claude    Better quality

Open Questions

  1. Confirmation for writes? Always ask before appending to KB?
  2. Client switching? Explicit command or detect from conversation?
  3. Session history? Save to file or memory only?
