How to Use Hourly Automations in Claude Code to Process Your Knowledge Base
Set up hourly automations in Claude Code to process new files, update your wiki, and push changes to GitHub—all without manual intervention.
What Hourly Automations in Claude Code Actually Do
If you manage a knowledge base — whether it’s a team wiki, a Notion export, a folder of Markdown files, or a collection of research docs — you already know the problem. Files pile up. Notes go unprocessed. Your wiki drifts out of sync with reality. And the manual work of organizing, summarizing, and committing updates never gets prioritized.
Claude Code’s automation capabilities let you fix this by setting up recurring jobs that process new files, update your knowledge base, and push changes to GitHub on a schedule — without you lifting a finger after the initial setup.
This guide walks through exactly how to configure hourly automations in Claude Code to keep your knowledge base current. You’ll learn how to structure your project, write the automation logic, schedule it reliably, and handle edge cases that trip people up.
Prerequisites and Setup
Before configuring the hourly automation, make sure you have the following in place.
What You’ll Need
- Claude Code installed and authenticated (available via Anthropic’s CLI)
- Node.js 18+ or Python 3.10+ depending on your scripting preference
- A Git repository for your knowledge base
- A folder structure that separates raw/unprocessed files from processed ones
- Basic familiarity with cron syntax or your OS’s task scheduler
Recommended Folder Structure
Keep your knowledge base organized from the start. A clean structure makes the automation logic simpler and reduces the chance of reprocessing files you’ve already handled.
knowledge-base/
├── inbox/ # New files drop here
├── processed/ # Files after automation runs
├── wiki/ # Updated wiki pages
├── summaries/ # Auto-generated summaries
├── logs/ # Automation run logs
└── scripts/ # Your automation scripts
The inbox/ folder is the entry point. Anything dropped there gets picked up by the hourly automation. The processed/ folder receives the original after it has been handled, so you have an audit trail.
Writing the Core Automation Script
The heart of this setup is a script that Claude Code runs on a schedule. It checks for new files, processes them using Claude’s API, updates the wiki, and commits the changes.
Step 1 — Set Up the Script File
Create a file at scripts/process-knowledge-base.js (or .py if you prefer Python). This script will do four things:
- Scan inbox/ for new files
- Send each file's content to Claude for processing
- Write the output to wiki/
- Stage and commit the changes to Git
Here’s the Node.js version:
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
const Anthropic = require('@anthropic-ai/sdk');
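// The client reads ANTHROPIC_API_KEY from the environment by default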
const client = new Anthropic();
const INBOX = path.join(__dirname, '../inbox');
const PROCESSED = path.join(__dirname, '../processed');
const WIKI = path.join(__dirname, '../wiki');
const LOGS = path.join(__dirname, '../logs');
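// Process one inbox file: ask Claude for a wiki entry, write it, archive the original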
async function processFile(filePath) {
const content = fs.readFileSync(filePath, 'utf8');
const fileName = path.basename(filePath, path.extname(filePath));
const response = await client.messages.create({
model: 'claude-opus-4-5',
max_tokens: 2048,
messages: [
{
role: 'user',
content: `You are a knowledge base editor. Process the following document and return a clean, well-structured wiki entry in Markdown. Include a summary, key concepts, and any action items. Document:\n\n${content}`
}
]
});
const wikiEntry = response.content[0].text;
const wikiPath = path.join(WIKI, `${fileName}.md`);
fs.writeFileSync(wikiPath, wikiEntry);
// Move original to processed
const processedPath = path.join(PROCESSED, path.basename(filePath));
fs.renameSync(filePath, processedPath);
return wikiPath;
}
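// Main entry point: scan the inbox, process each file, then commit and push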
async function run() {
const timestamp = new Date().toISOString();
const logFile = path.join(LOGS, `run-${Date.now()}.log`);
const logLines = [`[${timestamp}] Automation started`];
const files = fs.readdirSync(INBOX).filter(f => !f.startsWith('.'));
if (files.length === 0) {
logLines.push('No new files found. Exiting.');
fs.writeFileSync(logFile, logLines.join('\n'));
return;
}
logLines.push(`Found ${files.length} file(s) to process`);
const updatedFiles = [];
for (const file of files) {
const filePath = path.join(INBOX, file);
try {
const wikiPath = await processFile(filePath);
logLines.push(`Processed: ${file} → ${wikiPath}`);
updatedFiles.push(wikiPath);
} catch (err) {
logLines.push(`ERROR processing ${file}: ${err.message}`);
}
}
// Git commit
if (updatedFiles.length > 0) {
execSync('git add wiki/ processed/', { cwd: path.join(__dirname, '..') });
execSync(`git commit -m "Auto-update: processed ${updatedFiles.length} file(s) [${timestamp}]"`, {
cwd: path.join(__dirname, '..')
});
execSync('git push origin main', { cwd: path.join(__dirname, '..') });
logLines.push(`Git: committed and pushed ${updatedFiles.length} wiki update(s)`);
}
fs.writeFileSync(logFile, logLines.join('\n'));
}
run().catch(console.error);
Step 2 — Install Dependencies
npm install @anthropic-ai/sdk
Make sure your ANTHROPIC_API_KEY is set as an environment variable. You can add it to your shell profile or a .env file (use dotenv if you go the .env route).
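If you go the .env route, the setup is one line at the top of the script (this assumes you've run npm install dotenv):
// Load variables from a .env file at the repo root before the SDK initializes
require('dotenv').config();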
Step 3 — Test the Script Manually
Before scheduling, drop a test file in inbox/ and run the script directly:
node scripts/process-knowledge-base.js
Check that:
- The file disappeared from inbox/ and appeared in processed/
- A corresponding .md file was created in wiki/
- Your Git log shows a new commit
If all three happen, you’re ready to schedule.
Scheduling the Automation to Run Hourly
Once the script works manually, the next step is making it run on its own every hour.
On macOS and Linux — Using Cron
Open your crontab:
crontab -e
Add this line to run the script at the top of every hour:
0 * * * * /usr/local/bin/node /path/to/knowledge-base/scripts/process-knowledge-base.js >> /path/to/knowledge-base/logs/cron.log 2>&1
Replace /path/to/knowledge-base/ with your actual path. The >> logs/cron.log 2>&1 part appends both stdout and stderr to a log file, which is useful for debugging.
To verify cron has the right Node path:
which node
Use that full path in the crontab entry.
On Windows — Using Task Scheduler
- Open Task Scheduler from the Start menu.
- Click Create Basic Task.
- Name it something like “Knowledge Base Automation.”
- Set the trigger to Daily, then under Advanced settings enable the Repeat task every 1 hour option.
- Set the action to Start a Program, pointing to node.exe with your script path as the argument. (A command-line alternative is shown below.)
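If you prefer the command line, schtasks can create the same hourly task in one step (the paths here are illustrative; use your actual Node and script locations):
schtasks /create /tn "Knowledge Base Automation" /sc hourly /tr "\"C:\Program Files\nodejs\node.exe\" C:\knowledge-base\scripts\process-knowledge-base.js"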
Using GitHub Actions for Cloud-Based Scheduling
If your knowledge base lives in a GitHub repository, you can skip local scheduling entirely and use GitHub Actions.
Create .github/workflows/process-knowledge-base.yml:
name: Process Knowledge Base
on:
schedule:
- cron: '0 * * * *' # Every hour
workflow_dispatch: # Allow manual triggers
jobs:
process:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm install
- name: Run automation
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: node scripts/process-knowledge-base.js
- name: Push changes
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git add wiki/ processed/ logs/
git diff --cached --quiet || git commit -m "Auto-update knowledge base [$(date -u)]"
git push
Add your ANTHROPIC_API_KEY as a repository secret under Settings → Secrets and variables → Actions.
This approach is cleaner for distributed teams: no one needs to leave a machine running, and the automation executes reliably in the cloud. One caveat: the script's own commit-and-push step will also fire inside the workflow, so consider guarding those Git commands (for example, skip them when the CI environment variable is set) and letting the workflow's push step handle Git instead.
Customizing the Processing Logic
The basic script summarizes files and creates wiki entries. But you can tailor what Claude does with each file based on your needs.
Processing Different File Types
Not all files should be treated the same way. A meeting transcript needs different processing than a technical spec.
function buildPrompt(content, fileName) {
if (fileName.includes('meeting') || fileName.includes('transcript')) {
return `Extract action items, decisions made, and attendees from this meeting transcript. Format as a structured wiki entry.\n\n${content}`;
}
if (fileName.includes('spec') || fileName.includes('rfc')) {
return `Summarize this technical specification. Include: purpose, key requirements, open questions, and status.\n\n${content}`;
}
if (fileName.includes('research') || fileName.includes('notes')) {
return `Organize these research notes into a clean wiki entry with key findings, sources, and follow-up questions.\n\n${content}`;
}
return `Convert this document into a clean, well-organized wiki entry in Markdown.\n\n${content}`;
}
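To wire this in, replace the hard-coded prompt string in processFile() with buildPrompt(content, fileName).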
Updating an Existing Wiki Index
Rather than creating isolated files, you might want the automation to maintain a master index. After processing all files, have the script regenerate the index:
async function updateWikiIndex() {
const wikiFiles = fs.readdirSync(WIKI).filter(f => f.endsWith('.md') && f !== 'index.md');
const entries = wikiFiles.map(file => {
const name = file.replace('.md', '');
return `- [${name}](${file})`;
});
const indexContent = `# Knowledge Base Index\n\nLast updated: ${new Date().toISOString()}\n\n${entries.join('\n')}`;
fs.writeFileSync(path.join(WIKI, 'index.md'), indexContent);
}
Call updateWikiIndex() after the processing loop but before the Git commit step in run(), so the regenerated index lands in the same commit as the new entries.
Tagging and Categorizing Entries
You can ask Claude to return structured metadata alongside the wiki content:
const response = await client.messages.create({
model: 'claude-opus-4-5',
max_tokens: 2048,
messages: [
{
role: 'user',
content: `Process this document. Return JSON with two keys: "tags" (array of relevant topic tags) and "content" (full wiki entry in Markdown).\n\n${content}`
}
]
});
const result = JSON.parse(response.content[0].text);
// Use result.tags for categorization, result.content for the wiki file
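One caveat: models sometimes wrap JSON output in Markdown code fences, which makes a bare JSON.parse() throw. A small defensive helper (a heuristic of ours, not part of the SDK) keeps the run from crashing:
// Strip optional Markdown code fences before parsing model output as JSON
function parseModelJson(text) {
  const cleaned = text
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/```\s*$/, '')
    .trim();
  return JSON.parse(cleaned);
}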
Handling Edge Cases and Errors
Scheduled automations fail silently if you’re not careful. A few patterns will save you a lot of debugging.
Avoiding Duplicate Processing
If one run takes longer than an hour, the next scheduled run can start while the first is still working, and the two will fight over the same files. Add a simple lock file to prevent concurrent runs:
const LOCK_FILE = path.join(__dirname, '../.automation.lock');
function acquireLock() {
if (fs.existsSync(LOCK_FILE)) {
const lockAge = Date.now() - fs.statSync(LOCK_FILE).mtimeMs;
if (lockAge < 10 * 60 * 1000) { // 10 minutes
throw new Error('Another instance is running');
}
fs.unlinkSync(LOCK_FILE); // Stale lock — remove it
}
fs.writeFileSync(LOCK_FILE, String(Date.now()));
}
function releaseLock() {
if (fs.existsSync(LOCK_FILE)) fs.unlinkSync(LOCK_FILE);
}
Wrap your run() function with acquireLock() at the start and releaseLock() in a finally block.
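A minimal way to wire that up, replacing the bare run().catch(console.error) call at the bottom of the script:
async function main() {
  acquireLock(); // throws if another instance holds a fresh lock
  try {
    await run();
  } finally {
    releaseLock(); // always release, even when run() throws
  }
}
main().catch(err => {
  console.error(err);
  process.exitCode = 1; // non-zero exit so schedulers can detect failure
});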
Rate Limiting and Large Files
If your inbox gets a large batch of files, you might hit API rate limits. Add a small delay between requests:
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
for (const file of files) {
  const filePath = path.join(INBOX, file); // resolve the inbox path for this file
  await processFile(filePath);
  await sleep(1000); // 1 second between requests
}
For files over ~50KB, consider chunking the content or extracting only the most relevant sections before sending to Claude.
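A minimal chunking sketch (the threshold is a rough heuristic, not an API limit):
const MAX_CHARS = 50000; // rough per-request budget; tune for your content
function chunkContent(content, size = MAX_CHARS) {
  const chunks = [];
  for (let i = 0; i < content.length; i += size) {
    chunks.push(content.slice(i, i + size));
  }
  return chunks;
}
You can process each chunk as its own request and stitch the outputs together, or summarize the chunk summaries in a final pass.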
Git Conflicts
If multiple people are committing to the same repo, the push step can fail. Handle this gracefully:
const repoRoot = path.join(__dirname, '..');
try {
  execSync('git push origin main', { cwd: repoRoot });
} catch (err) {
  // Pull with rebase and retry once; a genuine merge conflict will still throw
  execSync('git pull --rebase origin main', { cwd: repoRoot });
  execSync('git push origin main', { cwd: repoRoot });
}
Extending with MindStudio’s Agent Skills Plugin
The automation above handles the core loop well. But if you want to extend what happens when a file gets processed — sending a Slack notification, logging to a spreadsheet, triggering a downstream workflow — wiring each integration manually gets tedious fast.
This is where MindStudio’s Agent Skills Plugin fits in cleanly. It’s an npm SDK (@mindstudio-ai/agent) that gives Claude Code (or any AI agent) access to 120+ pre-built capabilities as simple method calls. Instead of building each integration yourself, you call agent.sendSlackMessage() or agent.addGoogleSheetsRow() directly from your automation script.
const { MindStudioAgent } = require('@mindstudio-ai/agent');
const agent = new MindStudioAgent();
// After processing a file:
await agent.sendSlackMessage({
channel: '#knowledge-base',
text: `New wiki entry added: ${fileName}`
});
await agent.addGoogleSheetsRow({
spreadsheetId: 'your-sheet-id',
sheetName: 'Processing Log',
row: [fileName, timestamp, 'processed']
});
The SDK handles auth, retries, and rate limiting for every integration, so your script stays focused on the logic that matters. You can try MindStudio free at mindstudio.ai.
If you’d rather build the entire automation visually — including the scheduling, file handling, and Claude integration — MindStudio also supports autonomous background agents that run on a schedule without any local setup. That’s worth knowing if your team isn’t comfortable managing cron jobs or GitHub Actions workflows.
Monitoring Your Automation Over Time
Setting it and forgetting it is the goal, but you still need visibility into what’s running.
What to Log
Your logs should capture at minimum:
- Timestamp of each run
- Number of files found and processed
- Any errors, including which file caused them
- Git commit SHA after a successful push
Structured JSON logs are easier to query if you ever want to pipe them into a monitoring tool:
const logEntry = {
timestamp: new Date().toISOString(),
filesFound: files.length,
filesProcessed: updatedFiles.length,
errors: errorList, // per-file error messages collected during the run
commitSha: execSync('git rev-parse HEAD').toString().trim()
};
fs.appendFileSync(
path.join(LOGS, 'automation.jsonl'),
JSON.stringify(logEntry) + '\n'
);
Setting Up Alerts
For critical failures — like the script crashing entirely — you want a notification. The simplest option is a wrapper script that emails you if the main script exits with a non-zero code.
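A minimal local version posts to a Slack webhook instead of email (this assumes a SLACK_WEBHOOK environment variable and that the script exits non-zero on failure):
node scripts/process-knowledge-base.js || curl -X POST "$SLACK_WEBHOOK" -H 'Content-Type: application/json' -d '{"text": "Knowledge base automation failed. Check logs/."}'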
On GitHub Actions, you can add a step that only runs on failure:
- name: Notify on failure
if: failure()
run: |
curl -X POST ${{ secrets.SLACK_WEBHOOK }} \
-H 'Content-Type: application/json' \
-d '{"text": "Knowledge base automation failed. Check the Actions log."}'
FAQ
Can Claude Code run automations without manual triggers?
Yes. Claude Code’s scripting capabilities work with any standard scheduler — cron on Linux/macOS, Task Scheduler on Windows, or GitHub Actions for cloud-based execution. Once a script is written and a schedule is set, it runs without manual intervention. The key is writing a robust script that handles errors gracefully so it doesn’t silently fail.
How do I prevent the automation from reprocessing files it already handled?
The most reliable approach is moving files out of the inbox/ folder after processing — either deleting them or moving them to a processed/ directory. You can also maintain a simple text file that logs filenames or hashes of already-processed files, and check against that list at the start of each run.
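A sketch of that second approach, reusing the fs and path modules and the LOGS constant from the main script (the ledger filename is arbitrary):
const crypto = require('crypto');
const LEDGER = path.join(LOGS, 'processed-hashes.txt');
function contentHash(content) {
  return crypto.createHash('sha256').update(content).digest('hex');
}
function alreadyProcessed(hash) {
  // The ledger is one hex hash per line; a substring check is enough here
  return fs.existsSync(LEDGER) && fs.readFileSync(LEDGER, 'utf8').includes(hash);
}
function markProcessed(hash) {
  fs.appendFileSync(LEDGER, hash + '\n'); // record only after a successful run
}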
What file types can Claude process in a knowledge base automation?
Claude can handle plain text, Markdown, JSON, CSV, and any other text-based format directly. For PDFs, Word documents, or other binary formats, you’ll need a preprocessing step to extract the text first. Libraries like pdf-parse (Node.js) or pypdf (Python) work well for this. Once the content is a string, Claude can process it normally.
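A minimal extraction sketch using pdf-parse (install with npm install pdf-parse; the helper name is ours):
const fs = require('fs');
const pdfParse = require('pdf-parse');
async function extractPdfText(filePath) {
  const buffer = fs.readFileSync(filePath);
  const data = await pdfParse(buffer); // resolves to { text, numpages, info, ... }
  return data.text; // plain text, ready to send to Claude
}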
How much does it cost to run hourly Claude automations?
Cost depends on how many files you process each hour and how large they are. Claude Haiku is significantly cheaper than Opus for high-volume, simpler processing tasks — good for summaries and structured extraction. For a typical knowledge base with a few files per day, the API cost is negligible. If you’re processing hundreds of files hourly, model selection and content truncation become more important.
Is it safe to give an automated script write access to a Git repository?
Yes, with proper safeguards. Use a dedicated service account or deploy key with write access only to the specific repository — not your personal credentials. For GitHub Actions, the default GITHUB_TOKEN is scoped to the repository and expires after each run. For local cron jobs, use SSH keys rather than HTTPS credentials stored in plaintext.
How do I handle files that Claude can’t process correctly?
Build in error isolation at the file level. Process each file in its own try/catch block, log the error with the filename, and continue to the next file. Don’t let one bad file crash the entire run. For consistently problematic files, you can add them to a skip list or move them to a quarantine/ folder for manual review.
Key Takeaways
- Claude Code hourly automations work by combining a well-structured script with a reliable scheduler — cron, Task Scheduler, or GitHub Actions.
- The core loop is simple: scan for new files, send content to Claude, write the output to your wiki, commit, and push.
- Building in lock files, error isolation, and per-file logging makes the automation resilient enough to run unattended.
- GitHub Actions is the cleanest scheduling option for team knowledge bases — no local dependencies, built-in secret management, and automatic Git integration.
- For extending the automation with Slack, Sheets, or other integrations, MindStudio’s Agent Skills Plugin saves you from building each connection manually.
If you’re looking to go further — building out full workflow automations around your knowledge base without managing infrastructure — MindStudio is worth exploring. You can connect Claude, your file sources, and downstream tools in a visual builder and run everything on a schedule without a single cron job.