Henricus Louwhoff


About Autograph

The "Quine" Concept

Autograph works as a self-replicating file. When you click Save Blog, you download a new HTML file that contains the application logic PLUS all your new posts baked directly into the code. This new file is your updated master copy.

Writing Posts

Click + New Post to open the editor. You can write in Markdown. Clicking "Publish to Page" saves the post to your browser's local storage immediately.

Saving & Publishing

Changes in local storage are temporary. To make them permanent and shareable, you must click Save Blog. This generates the standalone HTML file that you can upload to any web host (like GitHub Pages or Netlify).

Navigation

The homepage is a chronological list. Clicking a post title opens the Single View (?post=slug-id), which is URL-friendly and indexable by search engines.

Customization

Use the Cog Icon to change the Blog Title, Author Bio, Page Title (for browser tabs), Footer text, and SEO Meta tags.

Backup & Migration

Use the Export Data button in settings to save a JSON backup. You can Import this JSON into a new version of Autograph to migrate your content.

Settings

Data Management

Export: Download all posts and settings as a JSON file. Use this to backup your data or migrate to a new version of Autograph.

Import: Load data from a JSON file. Warning: Importing will clear your current browser local storage to prevent conflicts.
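
For reference, the export is a plain JSON document. A minimal sketch of what a backup might contain (the field names here are illustrative; the actual keys are defined by Autograph):

```json
{
  "settings": {
    "blogTitle": "My Blog",
    "authorBio": "A short bio",
    "pageTitle": "My Blog",
    "footer": "Made with Autograph"
  },
  "posts": [
    {
      "slug": "hello-world",
      "title": "Hello World",
      "date": "2026-01-02",
      "markdown": "My first post."
    }
  ]
}
```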

Autograph v1.0

Created by Henricus Louwhoff

The Benefits of Quine Apps

JANUARY 2, 2026
I've been experimenting with Quine apps recently, building a blog and a kanban board that work entirely offline. The approach is simple but powerful, and it's made me rethink how we build web applications.

A Quine app is self-contained code that can modify itself. When you add or change items in my kanban board, they're temporarily stored in your browser's local storage. When you're ready, you save a new version of the app that includes those changes baked directly into the code. The app essentially rewrites itself with your data inside it.

This approach offers three major benefits. First, hosting is remarkably easy. You can put the app anywhere that serves static files, from GitHub Pages to a simple web server. There's no backend to maintain, no database to configure, and no server costs to worry about.

Second, you don't need accounts or authentication. There's no sign-up process, no passwords to remember, and no personal information to hand over. You simply open the app and start using it. This removes friction and makes the apps incredibly accessible.

Third, and perhaps most importantly, you own your data completely. Everything lives in the app file itself. You can save it, back it up, or share it however you like. There's no company that can shut down and take your data with it, no terms of service that might change, and no privacy concerns about who can access your information.

The offline capability means these apps work anywhere, regardless of your internet connection. Once you've loaded the app, it's yours to use whenever and wherever you need it.

Quine apps won't suit every use case, but for personal tools, small projects, and situations where simplicity and data ownership matter, they offer an elegant alternative to traditional web applications.

The Shift to Spec-Driven Development: From Vibe Coding to Agent-Powered Precision

DECEMBER 31, 2025

What Is Spec-Driven Development?


Spec-driven development represents a fundamental shift in how we approach AI-assisted coding. Instead of the iterative "vibe coding" approach (where developers prompt an AI, review code, adjust, and repeat), spec-driven development flips the traditional relationship between specifications and code. The specification becomes the source of truth that generates implementation.

The core idea is simple: create structured documents in natural language that describe what your software should do. These specifications guide AI coding agents, ensuring they understand what to build, why it matters, and what not to build. The workflow usually involves defining requirements with acceptance criteria, creating technical designs that outline architecture, and breaking work into trackable tasks.

Unlike traditional documentation that becomes outdated once coding begins, these specifications stay at the centre of your engineering process. More importantly, specifications evolve alongside your software. When you add new features, you create new specs whilst keeping old ones as historical records. This creates an audit trail showing what was implemented, when it was built, and why decisions were made. Maintaining software means evolving specifications. Debugging means fixing the specifications that generated incorrect code. The code itself becomes an expression of intent you've defined at a higher level.

This living history of specifications becomes invaluable over time. Teams can trace back through spec versions to understand the reasoning behind architectural choices, see how requirements shifted, and maintain continuity as developers join or leave the project. The spec repository becomes your project's memory.

Implementing Spec-Driven Development with Kiro and GitHub Spec-kit


Two tools have emerged to make spec-driven development practical: AWS's Kiro and GitHub's Spec-kit.

Kiro is an AI-powered development environment built around specifications. When you start a project, Kiro offers two paths: "vibe" mode for quick prototyping, or "spec" mode for structured development. Spec mode guides you through three phases:

1. Requirements generation: Kiro transforms your natural language prompt into detailed user stories with acceptance criteria in EARS notation (Easy Approach to Requirements Syntax; see the example after this list)
2. Design creation: The agent analyses your codebase and proposes architecture and technical stack
3. Implementation planning: Requirements and design are broken down into specific, trackable tasks
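
EARS is worth a closer look: it constrains each requirement to a handful of sentence patterns, which is what makes the criteria checkable. A sketch of acceptance criteria in EARS style (the wording is illustrative, not actual Kiro output):

```markdown
WHEN a user submits the sign-up form with an already registered email,
THE system SHALL reject the submission and display an "email already in use" message.

WHILE the account is locked,
THE system SHALL refuse login attempts and show the unlock instructions.

IF the verification email cannot be delivered,
THEN THE system SHALL queue a retry and notify the administrator.
```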

Kiro also includes "agent hooks" (event-driven automation that executes actions when specific events occur) and one-click "powers" that provide pre-packaged expertise for specific technologies like Amazon Aurora.

GitHub Spec-kit provides an open-source toolkit that works with various AI coding agents including GitHub Copilot, Claude Code, and Gemini CLI. Spec-kit follows a four-phase process:

1. Specification: Create a contract defining how your code should behave
2. Planning: The coding agent transforms the spec into a structured plan with architectural decisions
3. Tasks: The plan is broken into small, reviewable chunks
4. Implementation: The coding agent tackles tasks whilst you review focused changes

Spec-kit makes changing course simple: update the spec, regenerate the plan, and let the agent handle implementation. Your job is to steer whilst the agent writes.

The Evolution Towards Agent Developers with Human Oversight


We're witnessing a role reversal in software development. The traditional model (humans write code, occasionally write specifications as an afterthought) is giving way to a new paradigm where AI agents write code and humans focus on specification, architecture, and review.

This shift transforms developers from programmers into engineers who manage context, craft specifications, and validate outcomes. As one developer described it: working with language models is like managing an intern who has read every textbook but has zero practical experience with your specific codebase and forgets anything beyond the most recent hour of conversation.

The implications:

  • Humans move up the abstraction ladder: Instead of manipulating code, developers express intent in natural language, define system behaviour, and establish architectural constraints
  • Quality through structure: Forcing clear requirement definition upfront prevents costly rework and ensures AI-generated code aligns with organisational standards
  • Review becomes more meaningful: Rather than reviewing sprawling code changes and attempting to reverse-engineer intent, reviews follow a clear path from specification to plan to task to implementation
  • Agents handle mechanical translation: AI coding agents excel at translating specifications into working code, freeing humans to focus on creative problem-solving

This evolution doesn't diminish the role of developers. It elevates it. Developer expertise shifts from syntax and implementation details to system design, requirement clarity, and architectural decisions. The specification becomes the developer's primary output, and code becomes a continuously regenerated artefact.

As spec-driven development matures, software engineering means maintaining a clear understanding of what systems should do, with AI agents handling the increasingly automated work of making that understanding concrete in code.

What Happens When You Stop Writing Code and Start Explaining It Really Well

NOVEMBER 28, 2025
I was challenged to design a workforce management system with a few constraints: make it highly customisable with fewer primitives, break it into smaller services, and design it to be built 99.5% by AI agents rather than people. What would you do differently if an agent were the primary developer?

I'm building Mosaic to find out. It's a platform spanning HR, scheduling, engagement, and payroll. But really it's a proof of concept for myself. I want to explore what it means to architect a system and have it built by an agent whilst I stay in a leading, architectural role.

Three Primitives Instead of Many Contexts


The system needed to work across different industries and countries without rewriting core logic. That drove me toward a small set of flexible primitives: entities (things that exist like people and locations), events (things that happen like shifts and absences), and participations (how entities relate to events, like being a worker or trainee).

A shift is an event. An absence is an event. A training session is an event. The differences are in their configuration, their attributes, their status workflows. This makes the system adaptable without constant code changes.
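
Concretely, an event type's behaviour could be captured in configuration rather than code. A hypothetical blueprint sketch (the post doesn't show Mosaic's actual format):

```json
{
  "event_types": {
    "shift": {
      "attributes": ["starts_at", "ends_at", "location_id"],
      "statuses": ["draft", "published", "completed"],
      "participations": ["worker", "supervisor"]
    },
    "absence": {
      "attributes": ["starts_at", "ends_at", "reason"],
      "statuses": ["requested", "approved", "rejected"],
      "participations": ["requester", "approver"]
    }
  }
}
```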

But here's what changes with an agent as the primary developer. If I were implementing this myself, I'd still create a Shifts context. I'd build Shifts.create_shift() and Shifts.assign_worker() as conveniences. These domain-specific functions would help me keep track of business logic and make the codebase easier to navigate as a human.

The agent doesn't need that layer. It can work directly with Operations.execute(:create, "event", nil, params, user) and understand from the configuration that this particular event is a shift. It doesn't need separate modules for each event type to keep the domain concepts straight.
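
A minimal Elixir sketch of such a generic layer (module and field names are assumptions for illustration, not Mosaic's actual API):

```elixir
defmodule Operations do
  # Blueprints would be loaded from JSON at runtime; hardcoded here
  # to keep the sketch self-contained.
  @blueprints %{
    "event" => %{
      "shift" => ~w(starts_at ends_at location_id),
      "absence" => ~w(starts_at ends_at reason)
    }
  }

  # One generic entry point instead of Shifts.create_shift/1 and friends.
  def execute(:create, kind, _id, %{"type" => type} = params, _user) do
    case get_in(@blueprints, [kind, type]) do
      nil ->
        {:error, %{reason: :unknown_type, kind: kind, type: type}}

      allowed ->
        # The blueprint, not a dedicated module, decides which
        # attributes this event type accepts.
        {:ok, %{kind: kind, type: type, attrs: Map.take(params, allowed)}}
    end
  end
end

# Creating a shift is just creating an event whose configuration says "shift":
Operations.execute(:create, "event", nil,
  %{"type" => "shift", "starts_at" => "09:00", "ends_at" => "17:00"},
  %{id: 1})
```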

What Changes in the Architecture


The codebase looks different when an agent is the primary developer. Everything becomes introspectable. The agent reads JSON blueprints at runtime to discover what entity types, event types, and participation types exist. It doesn't need these encoded in separate modules.

Operations are consistent and generic rather than domain-specific. There's no Shift.create() or Absence.approve(). There's Operations.execute() with different parameters. Queries build from parameters instead of calling named functions. This isn't because agents struggle with multiple functions. It's because they don't need the convenience layer.

Error responses include enough structure for the agent to understand what went wrong and fix it. A validation error returns which fields failed and why. A permission error suggests what the user can do instead. A concurrency conflict returns the current state so the agent can retry with fresh data.
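
Written out as Elixir terms, the three error shapes described above might look like this (the exact structure is illustrative):

```elixir
# Validation: which fields failed and why
{:error, %{type: :validation, fields: %{ends_at: "must be after starts_at"}}}

# Permission: what the user is allowed to do instead
{:error, %{type: :permission, attempted: :approve, allowed: [:request, :comment]}}

# Concurrency: the current state, so the agent can retry with fresh data
{:error, %{type: :conflict, current: %{status: "approved", version: 7}}}
```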

Documentation becomes more important, not less. Module docs, function docs, inline comments. When a human writes code they understand the context implicitly. When an agent writes code, that context needs to be explicit. Good documentation helps the agent make better decisions about edge cases and understand how pieces fit together.

The specifications explain why the system is shaped this way, not just what it does. When I write "entities can form hierarchies" or "events have status workflows", the agent understands the implications without me detailing every case. It can figure out that hierarchies need parent references and recursive queries, that status transitions need validation and cascade rules.

The Human's Job


My responsibility is the system's conceptual integrity. Getting the primitives right. Thinking through how entities, events, and participations compose to handle workforce management's full complexity across industries and jurisdictions.

I write specifications that explain these decisions. The agent uses them to implement. When it gets stuck or heads in the wrong direction, I provide corrections. I review whether the implementation matches the intended architecture, not whether it follows style guides.

The valuable skill becomes architectural thinking. Understanding how to decompose problems. Recognising what's primitive and what's derived. Seeing where flexibility matters and where constraints help. These were always important, but now they're the primary value rather than implementation ability.

What This Means


If an agent can implement most systems given clear specifications, then architecture becomes the core work. Not framework knowledge or implementation patterns, but designing systems that make sense, explaining them clearly, and verifying the implementation matches intent.

That's a different way of thinking about software development. The code quality is fine. The question is whether the architecture is sound and whether you can communicate it well enough for an agent to build correctly.

Worth exploring what that changes about how we design systems.

My AI-Supported Development Workflow

OCTOBER 14, 2025
Inspired by a16z's exploration of the AI software development stack, I've built an approach that uses multiple AI models, each for what it does best. This is how I work through projects from start to finish.

The Planning Phase


I begin every project by brainstorming with GPT-5 High, which excels at big-picture thinking and structured planning. I ask it to create a comprehensive implementation plan with a detailed todo list, complete with checkmarks, saved as a markdown file in the project root. This plan is purely for tracking the implementation process: what needs to be built and in what order.

An important part of the plan includes creating specification files. These aren't the same as the plan itself. The plan details that specification files need to be created, and those files will document the actual behaviour, architecture, and requirements of the system.

I iterate with GPT-5 High until I'm satisfied with the scope and structure. By iteration, I mean I read through the plan, make corrections where needed, and ask questions to see if it can be improved. This review process ensures I've thought through edge cases and dependencies before writing a single line of code.
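
The plan itself is nothing exotic: a markdown checklist ordered by phase. A trimmed-down sketch of the shape such a file takes (contents illustrative):

```markdown
# Implementation Plan

## Phase 1: Data model
- [x] 1.1 Define schemas and migrations
- [x] 1.2 Create specs/DATA_MODEL.md documenting entities and constraints

## Phase 2: API
- [ ] 2.1 Create specs/API_SPEC.md for the public endpoints
- [ ] 2.2 Implement the endpoints against the spec, with tests
- [ ] 2.3 Verify implementation and tests against the spec
```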

Implementation with Claude


Once the plan is solid, I switch to Claude Sonnet 4.5 for implementation. The todo list is ordered by impact and effort, ensuring I tackle work in the most logical sequence. I work through it item by item, where each task represents a small piece of work that can be completed and tested in isolation.

For each task, Claude delivers three things:

1. The specification file(s) documenting what the feature does, its expected behaviour, edge cases, and architectural decisions
2. The implementation code that realises the specification
3. Automated tests that verify the implementation matches the specification

As the code evolves, so does its documentation. The specification files aren't afterthoughts but deliverables created alongside the code. Claude only marks a task as complete when all three elements are in place and the tests pass. This keeps me focused and gives me confidence that each completed feature is properly documented and working.

The Verification Loop


After each task, I turn to GPT-5 Codex Medium for verification. It acts as a fresh pair of eyes, reviewing all three deliverables:

  • Does the specification file clearly document the feature's behaviour and decisions?
  • Does the implementation actually fulfil what the specification describes?
  • Is the implementation complete with all the necessary components in place?
  • Do the tests adequately cover the specified behaviour?

If Codex spots issues (whether in documentation clarity, missing implementation pieces, implementation correctness, or test coverage), I feed that feedback back to Claude Sonnet 4.5 for refinements. This might mean updating the specification for clarity, completing missing parts, fixing the implementation, or improving the tests.

This creates a feedback loop where the specification, implementation, and tests all stay aligned and complete before moving forwards.

A practical example: I once had Claude Sonnet 4.5 confidently mark a task as complete. The code was written, tests passed, and the specification was thorough. But when GPT-5 Codex Medium reviewed it, it quickly spotted that whilst everything was technically done, the new code wasn't wired up yet and wasn't actually in use. Codex then generated a prompt explaining how to wire the new code into the main workflow, which I fed back to Claude. Without this verification step, I would have moved on thinking the feature was complete when it wasn't actually functional.

Final Quality Check


When Claude has worked through the entire implementation plan, I do one final pass with GPT-5 Codex Medium. This comprehensive review ensures all requirements are properly met, specification files accurately document the system, test coverage is adequate, and everything aligns.

At this point, I have working code, specification files that explain what was built and why, tested implementations that match those specifications, and multiple layers of verification. The plan in the root has served its purpose as an implementation tracker, whilst the specification files provide lasting documentation of the system.




Model Selection


I choose models based on what they're good at. GPT-5 High excels at complex, big tasks like initial planning and brainstorming. It can reason through architecture, dependencies, and project structure at a high level, making it the right choice for comprehensive implementation plans.

Claude Sonnet 4.5 is the best at implementing code. It translates specifications into clean, working implementations, generates thorough tests, and maintains consistency across all deliverables.

GPT-5 Codex Medium is better for faster verifications of smaller tasks. GPT-5 Codex High takes longer to think more deeply about problems, but this extended reasoning isn't needed when verifying and reviewing a single task. A faster model works better for the tight feedback loop during iterative development.

Optimising the Workflow


The tools work better when configured for this specific workflow. I maintain configuration files that fine-tune each model's behaviour:

  • CLAUDE.md for Claude Sonnet 4.5 - defines how it should approach implementation, testing, and documentation
  • AGENTS.md for GPT-5 Codex - configures verification patterns and review focus areas


These configuration files ensure each model understands its role in the workflow and follows consistent patterns across projects.




Why This Works


Each AI does what it's good at:

  • GPT-5 High: Strategic planning and implementation roadmaps
  • Claude Sonnet 4.5: Creating specification files, detailed implementation, test generation, and marking off completed tasks
  • GPT-5 Codex Medium: Verification across specifications, code, and tests

The result is cleaner code, comprehensive test coverage, proper documentation through specification files, and a collaborative development process. By treating AI models as specialist team members rather than all-purpose tools, I get quality results whilst keeping development efficient.

The separation of concerns matters. The plan tracks implementation progress, the specification files document what was built, the tests verify correctness, and multiple verification layers catch issues before they become problems. It scales from small features to complex projects.




This workflow was inspired by a16z's article on the trillion-dollar AI software development stack, which provides an excellent overview of the emerging AI coding ecosystem.

Let the Tools Do the Talking: Why Automated Code Style Enforcement is Essential for AI-Assisted Development

SEPTEMBER 3, 2025
In our previous post about building an AI toolkit for our team, we explored how AI agents are becoming integral to our development workflow. Today, let's dive into a critical foundation that makes AI-assisted development truly effective: automated code style enforcement.

The Challenge: Teaching AI Agents Your Team's Style


When AI agents contribute to your codebase, they need clear, consistent patterns to follow. Without automated enforcement, agents struggle to understand and replicate your team's style preferences. They might generate code that's functionally correct but looks foreign in your codebase—using different alias patterns, inconsistent pipe usage, or varying module organisation.

The result? Code that works but doesn't feel like it belongs, making the codebase harder to maintain and understand over time.

The Solution: Automated Style Enforcement


Tools like Credo and Dialyzer don't just catch bugs—they teach. When your codebase follows strict, automated style guidelines, AI agents learn from consistent examples throughout your project. Every function, every module, every pattern follows the same rules, creating a coherent style language that agents can understand and replicate.

This automated consistency becomes the foundation for effective AI collaboration:

1. Create Predictable Patterns for AI Agents

AI agents excel at pattern recognition. When your codebase consistently uses the same alias organisation, pipe patterns, and module structure, agents can generate code that seamlessly fits your existing style.

2. Eliminate Style Guesswork

Without clear rules, AI agents make inconsistent choices about formatting and structure. Automated enforcement removes this ambiguity, ensuring every piece of generated code follows your team's standards.

3. Scale Your Team's Style

As AI agents become more prevalent in development, automated style enforcement lets you scale your team's coding standards without manually reviewing every AI contribution.

4. Maintain Codebase Coherence

A consistent style creates a unified voice across your entire codebase, whether written by humans or AI agents.

A Practical Example: Credo Configuration


Here's how we configure Credo to enforce consistency that works well with AI-assisted development:

```elixir
%{
  configs: [
    %{
      name: "default",
      strict: true,
      files: %{
        included: ["lib/", "test/"],
        excluded: ["_build/", "deps/", ".elixir_ls/", "lib/rules_engine/dsl/parser.ex"]
      },
      checks: [
        # Strict readability around pipes
        {Credo.Check.Readability.SinglePipe, []},
        {Credo.Check.Refactor.PipeChainStart, []},

        # Additional readability checks
        {Credo.Check.Readability.MultiAlias, []},
        {Credo.Check.Readability.SeparateAliasRequire, []},
        {Credo.Check.Readability.AliasAs, []}
      ]
    }
  ]
}
```


These specific checks ensure:
  • Consistent pipe usage: AI agents learn clear patterns for data transformation
  • Organised imports: Aliases and requires follow predictable structures
  • Readable code flow: Single pipes and proper chain starts make code easier to parse
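
As a concrete illustration, here is the shape the two pipe checks push code toward (based on the checks' documented behaviour):

```elixir
defmodule StyleExamples do
  # Credo.Check.Readability.SinglePipe: one-step pipes are flagged.
  def count_flagged(users), do: users |> Enum.count()
  def count_preferred(users), do: Enum.count(users)

  # Credo.Check.Refactor.PipeChainStart: chains should start from a
  # raw value, not a function call.
  def normalise_flagged(input), do: String.trim(input) |> String.downcase()

  def normalise_preferred(input) do
    input
    |> String.trim()
    |> String.downcase()
  end
end
```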

Additional Benefits for Human Developers


Whilst the primary goal is enabling consistent AI-generated code, automated style enforcement brings several benefits for human developers too:

Streamlined Code Reviews

With style automatically enforced, PR reviews can focus on logic, architecture, and business requirements rather than formatting debates.

Reduced Cognitive Load

Developers don't need to remember style preferences—the tools handle consistency whilst they focus on solving complex problems.

Faster Onboarding

New team members can immediately understand the codebase standards through automated feedback rather than tribal knowledge.

Early Bug Detection

Tools like Dialyzer catch type inconsistencies and potential runtime errors before they reach production, creating a safety net for both human and AI-generated code.

Documentation Through Code

Consistent patterns serve as implicit documentation, making the codebase self-explaining for new contributors.

Reduced Technical Debt

Automated checks prevent style inconsistencies from accumulating over time, keeping the codebase maintainable.

Improved Refactoring Confidence

When style is consistent, large-scale refactoring becomes more predictable and less error-prone.

Better Tooling Integration

IDEs and AI assistants work more effectively with consistently formatted code, providing better autocomplete and suggestions.

Making It Work in Practice


To successfully implement automated code style enforcement:

1. Run checks in CI/CD: Make style compliance a requirement for merging (see the mix alias sketch after this list)
2. Integrate with editors: Provide immediate feedback during development
3. Start strict: It's easier to begin with strict rules than to tighten them later
4. Document exceptions: When you exclude files or disable checks, document why
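
One way to wire the first two points together in an Elixir project is a single mix alias that both CI and local editors run (a sketch; adapt it to your own toolchain, and note that `dialyzer` assumes the dialyxir package is installed):

```elixir
# In mix.exs: one `mix check` task for CI and day-to-day use
defp aliases do
  [
    check: [
      "format --check-formatted", # fail on unformatted code
      "credo --strict",           # style and consistency checks
      "dialyzer"                  # type and contract analysis (dialyxir)
    ]
  ]
end
```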

The Bottom Line


AI agents are becoming essential development partners, but they need consistent patterns to generate code that truly fits your codebase. Automated code style enforcement creates this consistency, teaching agents your team's standards whilst ensuring every contribution—human or AI—follows the same rules.

The goal isn't just cleaner code reviews or fewer style debates. It's about creating a codebase that speaks with one voice, where AI agents can contribute meaningfully because they understand exactly how your team writes code.

Let your tools teach your agents. Let consistency drive collaboration.




What tools does your team use for automated code quality? Share your configuration strategies and experiences in the comments below.

Building an AI Toolkit for Development Teams: A Comprehensive Guide to Agent Configuration Files

SEPTEMBER 2, 2025
As AI agents become increasingly integrated into software development workflows, establishing a structured approach to agent configuration is crucial for team efficiency and consistency. This guide outlines a comprehensive system for organizing AI agent instructions through strategic file placement and content organization.

The Three-Tier Configuration System


Personal Configuration: ~/.claude/CLAUDE.md or ~/AGENTS.md


Location: A dedicated .claude directory in your home directory (for Claude-specific configs) or your home directory root (for general AI agent configs)
Purpose: Contains your individual preferences, coding style, and personal workflow requirements

Content Structure:
```markdown
# Personal AI Agent Configuration

## Coding Preferences
- Language: [Primary languages and frameworks]
- Code Style: [Formatting preferences, naming conventions]
- Testing Approach: [Preferred testing frameworks and methodologies]
- Documentation Style: [Comment style, README preferences]

## Communication Style
- Response Format: [Detailed vs. concise, technical level]
- Explanation Depth: [How much context you typically need]
- Code Review Focus: [Security, performance, readability priorities]

## Development Environment
- IDE/Editor: [Your preferred development tools]
- OS: [Operating system considerations]
- Shell: [Command line preferences]
- Package Managers: [npm, yarn, pip, etc.]

## Personal Workflows
- Git practices: [Branch naming, commit message style]
- Debugging preferences: [Tools and approaches you favor]
- Deployment considerations: [Platforms you work with]
```


Why Personal Configuration Matters:
Personal configuration files ensure that AI agents adapt to your individual working style across all projects. This creates consistency in how agents interact with you, regardless of the project context. It's particularly valuable for maintaining your coding standards and receiving responses in your preferred format.

Project Configuration: ./AGENTS.md or ./CLAUDE.md


Location: Project root directory
Purpose: Contains project-specific context, architecture decisions, and team standards

Content Structure:
```markdown
# Project AI Agent Configuration

## Project Overview
- Name: [Project name]
- Description: [Brief project description]
- Architecture: [High-level architecture overview]
- Tech Stack: [Languages, frameworks, databases]
- Target Audience: [Who uses this software]

## Development Standards
- Code Style: [Project-specific style guide]
- Testing Requirements: [Coverage expectations, test types]
- Documentation Standards: [API docs, code comments]
- Performance Requirements: [Speed, memory, scalability needs]

## Project Structure
lib/
  my_app/
    accounts/          # User management context
    blog/              # Blog context
    repo.ex            # Database repository
  my_app_web/
    controllers/       # Phoenix controllers
    live/              # LiveView modules
    templates/         # EEx templates
    views/             # Phoenix views
    router.ex          # Application routes
    endpoint.ex        # HTTP endpoint
config/                # Application configuration
priv/
  repo/
    migrations/        # Database migrations
    seeds.exs          # Database seeds
  static/              # Static assets
test/                  # Test files
  my_app/              # Context tests
  my_app_web/          # Web layer tests
  support/             # Test helpers
deps/                  # Dependencies

## Key Dependencies
- [List critical dependencies and their purposes]
- [Note any deprecated or problematic dependencies]
- [Mention version constraints]

## Domain Knowledge
- [Business logic explanations]
- [Industry-specific terminology]
- [Regulatory or compliance requirements]

## Common Tasks
- [Frequent development patterns in this project]
- [Typical debugging scenarios]
- [Deployment procedures]
```

Feature/Module Configuration: ./feature-name/AGENTS.md or ./feature-name/CLAUDE.md


Location: Within specific feature directories or modules
Purpose: Contains highly specific context for particular features or subsystems

Content Structure:
```markdown
# Feature-Specific Agent Configuration

## Feature Overview
- Purpose: [What this feature does]
- Dependencies: [Other features/modules this relies on]
- Data Flow: [How data moves through this feature]

## Implementation Details
- Key Files: [Most important files to understand]
- Design Patterns: [Patterns used in this feature]
- External Integrations: [APIs, services, databases]

## Testing Strategy
- Test Types: [Unit, integration, e2e specific to this feature]
- Mock Data: [Where to find or how to generate test data]
- Edge Cases: [Known edge cases and how they're handled]

## Known Issues
- [Current bugs or limitations]
- [Technical debt items]
- [Future refactoring plans]
```


The Power of Technical Specification Files


Why Technical Specifications Matter for AI Agents


Technical specification files serve as concentrated knowledge bases that enable AI agents to understand complex codebases without exhaustive code scanning. Here's why they're invaluable:

Contextual Understanding: Rather than inferring patterns from code, agents receive explicit explanations of architectural decisions, design patterns, and business logic. This leads to more accurate suggestions and reduces the likelihood of recommendations that conflict with established patterns.

Efficiency: Instead of analysing thousands of lines of code to understand system architecture, agents can quickly reference specification files to grasp the overall structure and make informed decisions about code changes.

Consistency: Specifications ensure that AI suggestions align with established project standards and architectural decisions, maintaining code consistency across the team.

Essential Technical Specification Files


./specs/ARCHITECTURE.md
```markdown
# System Architecture

## High-Level Design
[System overview with diagrams]

## Data Flow
[How data moves through the system]

## Integration Points
[External services, APIs, databases]

## Security Considerations
[Authentication, authorization, data protection]

## Scalability Approach
[How the system handles growth]
```


./specs/API_SPEC.md
```markdown
# API Specification

## Endpoints
[Detailed endpoint documentation]

## Authentication
[How API authentication works]

## Data Models
[Request/response schemas]

## Error Handling
[Standard error responses]

## Rate Limiting
[API usage constraints]
```


./specs/DATABASE_SCHEMA.md
```markdown
# Database Design

## Entity Relationships
[Tables and their relationships]

## Indexing Strategy
[Performance optimization details]

## Migration Approach
[How schema changes are handled]

## Data Validation
[Constraints and validation rules]
```


Agent Integration Benefits


When AI agents have access to these specification files, they can:

1. Generate Contextually Appropriate Code: Understanding the existing architecture ensures new code follows established patterns
2. Suggest Relevant Tests: Knowing the testing strategy helps generate appropriate test cases
3. Identify Integration Points: Understanding system boundaries helps with API integration and service communication
4. Maintain Consistency: Adhering to documented standards prevents architectural drift
5. Optimize Performance: Understanding performance requirements guides optimization suggestions

Specialized Project Agents


Defining Specialized Agent Roles


Different aspects of software development benefit from specialized AI agents configured for specific tasks:

Test Agent Configuration
```markdown
# Test Agent Specialization

## Primary Responsibilities
- Generate comprehensive test suites
- Identify edge cases and boundary conditions
- Suggest test data scenarios
- Recommend testing strategies

## Configuration Focus
- Testing frameworks used in project
- Coverage requirements and standards
- Mock data generation approaches
- Performance testing considerations

## Prompt Enhancements
"When generating tests, prioritize:
1. Edge cases and error conditions
2. Integration between components
3. Performance under load
4. Security vulnerabilities
5. Backwards compatibility"
```


Code Review Agent Configuration
```markdown
# Code Review Agent Specialization

## Primary Responsibilities
- Security vulnerability identification
- Performance optimization suggestions
- Code style and consistency checks
- Architecture compliance verification

## Configuration Focus
- Security best practices for the tech stack
- Performance benchmarks and optimization patterns
- Code quality metrics and standards
- Accessibility requirements

## Review Checklist
- [ ] Security vulnerabilities
- [ ] Performance implications
- [ ] Code maintainability
- [ ] Test coverage
- [ ] Documentation completeness
```


Documentation Agent Configuration
```markdown
# Documentation Agent Specialization

## Primary Responsibilities
- Generate comprehensive API documentation
- Create user guides and tutorials
- Maintain architectural documentation
- Produce code comments and inline documentation

## Configuration Focus
- Documentation standards and formats
- Audience-specific writing styles
- Visual diagram generation
- Version control for documentation

## Documentation Types
- API reference documentation
- User guides and tutorials
- Architectural decision records
- Code comments and inline docs
```


Implementation Strategy


1. Agent Role Definition
Create specific configuration files for each specialized agent, clearly defining their scope and responsibilities.

2. Context Isolation
Ensure each specialized agent has access to relevant specification files while filtering out unnecessary information that might dilute their focus.

3. Workflow Integration
Integrate specialized agents into your development workflow at appropriate points (pre-commit hooks, code review processes, deployment pipelines).

4. Feedback Loops
Establish mechanisms for agents to learn from team feedback and improve their specialized knowledge over time.

Best Practices for Implementation


Getting Started


1. Begin with Personal Configuration: Start by creating your personal agent configuration file to establish individual preferences
2. Establish Project Standards: Work with your team to define project-level configuration standards
3. Iterate and Refine: Continuously update configuration files based on agent performance and team feedback
4. Document the Process: Create team guidelines for maintaining and updating agent configuration files

Maintenance Guidelines


Regular Updates: Schedule regular reviews of configuration files to ensure they remain current with project evolution.

Version Control: Treat agent configuration files as part of your codebase, with proper version control and change tracking.

Team Alignment: Ensure all team members understand and contribute to agent configuration maintenance.

Performance Monitoring: Track how agent configurations impact development velocity and code quality.

Conclusion


Implementing a structured AI agent toolkit through thoughtful configuration files transforms AI from a generic assistant into a specialized team member that understands your codebase, follows your standards, and contributes meaningfully to your development process. By establishing clear configuration hierarchies, maintaining comprehensive technical specifications, and deploying specialized agents, development teams can significantly enhance their productivity while maintaining code quality and consistency.

The investment in setting up these configuration files pays dividends through improved AI suggestions, reduced onboarding time for new team members, and more consistent development practices across projects. As AI continues to evolve, teams with well-structured agent configurations will be better positioned to leverage new capabilities effectively.
Henricus is a backend software developer with a passion for complex systems and elegant code architecture. Specializing in Elixir and Rust, he thrives on solving challenging technical problems and untangling intricate backend logic, though he openly admits frontend development isn't his forte. When he's not deep in code, Henricus enjoys analog and digital photography and considers being a dad his greatest achievement and defining characteristic.