Auto‑Blog Agent: Automating Weekly GitHub‑to‑LinkedIn Content
Every week, the agent scans your public repos, reads the README, asks an LLM to craft a blog post and LinkedIn copy, and publishes them—no manual effort required.
Tags: python · open-source · llm · automation · supabase · github · linkedin · async
✓ 12 blog posts and 12 LinkedIn posts per week
Architecture Overview
```mermaid
graph LR
    A[GitHub Actions] --> B[Python Agent]
    B --> C{Sync Phase}
    C --> D[Fetch Repos]
    C --> E[Get Week]
    C --> F[Get Processed]
    B --> G[Async Phase]
    G --> H[Process Repo]
    H --> I[LLM Call]
    H --> J[Save Blog]
    H --> K[Save Project]
    H --> L[Post LinkedIn]
    H --> M[Mark Processed]
```
Core Components
- `agent.core.github` – Uses `httpx` to call the GitHub REST API. It fetches public repos, filters by creation date, and extracts the README. The README is truncated to `config.github.max_readme_length` characters to keep the prompt size reasonable.
- `agent.core.llm` – Wraps OpenRouter `httpx` calls. It sends a system prompt and user prompt, then validates the JSON with Pydantic v2. If validation fails it retries with the fallback model.
- `agent.core.database` – A thin wrapper over `supabase-py`. All read/write operations are synchronous; we use `asyncio.to_thread` to keep the event loop unblocked.
- `agent.core.linkedin` – Posts to LinkedIn via the ugcPosts API. Errors are caught and the raw LLM output is stored in `agent_processed_repos.raw_llm_output` for manual recovery.
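The validate-then-fall-back flow in `agent.core.llm` can be sketched as below. The schema fields and `validate_with_fallback` helper are illustrative, not the project's actual names; the point is that Pydantic v2's `model_validate_json` raises on both malformed JSON and schema violations, so a single `except ValidationError` covers both failure modes before retrying with the fallback model:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical schema for the LLM's JSON contract; field names are
# illustrative, not the project's actual model.
class LLMOutput(BaseModel):
    blog_title: str
    blog_body: str
    linkedin_post: str

def validate_with_fallback(primary_raw: str, call_fallback_model) -> LLMOutput:
    """Validate the primary model's raw JSON; if it fails validation,
    call the fallback model once and validate its output instead."""
    try:
        return LLMOutput.model_validate_json(primary_raw)
    except ValidationError:
        # Malformed JSON and schema mismatches both land here.
        return LLMOutput.model_validate_json(call_fallback_model())
```

A second `ValidationError` from the fallback model propagates to the caller, which is where the raw output gets written to the audit table for manual recovery.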
Async Strategy
```python
from asyncio import to_thread

async def _process_repo(repo):
    # Blocking I/O (httpx sync client, supabase-py) runs in worker threads.
    readme = await to_thread(fetch_readme, repo)
    llm_output = await to_thread(llm_call, readme)
    blog = BlogPostInsert.from_llm_output(llm_output)
    project = ProjectInsert.from_llm_output(llm_output)
    await to_thread(save_blog, blog)
    await to_thread(save_project, project)
    try:
        await to_thread(post_to_linkedin, llm_output.linkedin_post)
    except Exception as e:
        # A LinkedIn failure must not lose the blog/project writes.
        log.error("LinkedIn post failed", exc_info=e)
    await to_thread(mark_repo_processed, repo.id, status="success")
```
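The per-repo coroutine above is then fanned out across all repos. A minimal sketch of that orchestration, assuming a `run_all` helper and a concurrency cap that are mine, not the project's:

```python
import asyncio

async def run_all(repos, process, max_concurrency=4):
    # Bound concurrency so LLM and LinkedIn rate limits aren't hammered.
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(repo):
        async with sem:
            return await process(repo)

    # return_exceptions=True: one failing repo doesn't cancel the rest;
    # its exception comes back in the results list instead.
    return await asyncio.gather(
        *(guarded(r) for r in repos), return_exceptions=True
    )
```

With `return_exceptions=True`, the caller can inspect the results list afterwards and log which repos failed without losing the successful ones.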
Trade‑offs
- Using the `httpx` sync client keeps dependencies minimal but forces `asyncio.to_thread`. The overhead is negligible for a handful of repos.
- The JSON contract guarantees deterministic downstream processing. It also makes unit tests trivial.
- The audit table records every repo even in dry‑run mode; you need to clear it before a real run.
Future Work
Adding a retry back‑off for LinkedIn, a cache for LLM responses, and a UI to trigger manual runs would make the agent even more robust.
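The retry back-off could be as simple as the sketch below. The helper name, retry count, and delays are assumptions for illustration; the real agent would wrap `agent.core.linkedin`'s post call:

```python
import time

def post_with_backoff(post_fn, retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry post_fn with exponential back-off (1s, 2s, 4s, ...).
    The sleep function is injectable so tests don't actually wait."""
    for attempt in range(retries):
        try:
            return post_fn()
        except Exception:
            if attempt == retries - 1:
                raise  # exhausted: surface the error to the caller
            sleep(base_delay * (2 ** attempt))
```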