AI can generate a thousand articles a minute. But it can't do your thinking for you. Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft.
Your blog is your reputation — start building it.
7h ago · 6 min read · The Free AI Stack: GLM-5.1 + Gemini + Claude Without Paying a Dollar Let me cook: 🔥 I run a production multi-tenant SaaS backend with three AI providers running in parallel. GLM-5.1 for coding, Gemini for research, Claude for complex...
3h ago · 6 min read · Scatter/Gather I/O in Linux: A Deep Dive into readv() and writev() Efficient data transfer is a cornerstone of high-performance systems programming. Linux provides powerful mechanisms to optimize input...
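The scatter/gather idea the article above covers can be illustrated briefly: one syscall moves several buffers at once instead of one write per buffer. This is a minimal sketch using Python's `os.writev`/`os.readv` wrappers around the same `writev(2)`/`readv(2)` syscalls (not code from the article itself; Unix-only):

```python
import os

# Gather write / scatter read over a pipe, one syscall each way.
r, w = os.pipe()

# Gather: three separate buffers written with a single writev call.
written = os.writev(w, [b"header|", b"payload|", b"footer"])
os.close(w)

# Scatter: one readv call fills multiple pre-allocated buffers in order.
buf1 = bytearray(7)   # receives "header|"
buf2 = bytearray(8)   # receives "payload|"
buf3 = bytearray(6)   # receives "footer"
read = os.readv(r, [buf1, buf2, buf3])
os.close(r)

print(written, read)             # total bytes moved each way
print(bytes(buf1), bytes(buf3))
```

The win in real servers is avoiding a copy into one contiguous staging buffer: header, body, and trailer can live in separate allocations and still go out in a single syscall.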
5h ago · 7 min read · Introduction: Bridging Cloud Power and Local Control In the hyper-accelerated world of AI-assisted engineering, developers face a critical dilemma: intelligence vs. privacy. How do you leverage the el...
17m ago · 11 min read · Validation is one of the first architectural decisions a team makes when building ASP.NET Core Minimal APIs — and one of the most consequential. Get it wrong early and you end up with scattered validation...
1h ago · 27 min read · TLDR: Partitioning splits one logical table into smaller physical pieces called partitions. The database planner skips irrelevant partitions entirely — turning a 30-second full-table scan into a 200ms single-partition read. Range partitioning is best...
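The partition-pruning claim in that TLDR can be shown with a toy model: partitions keyed by range, and a planner-style lookup that scans only the partition whose range covers the predicate. This is an illustrative Python sketch of the idea, not real database internals (the month bounds and rows are made up):

```python
from bisect import bisect_right

# Toy range partitioning: one logical "events" table split into
# per-month partitions, sorted by their lower bound.
bounds = ["2024-01", "2024-02", "2024-03"]
partitions = [
    [("2024-01-05", "signup"), ("2024-01-20", "login")],
    [("2024-02-11", "purchase")],
    [("2024-03-02", "login"), ("2024-03-28", "churn")],
]

def query_month(month: str) -> list:
    """Partition pruning: pick exactly one partition by range, scan only it."""
    idx = bisect_right(bounds, month) - 1
    if idx < 0:
        return []  # predicate falls before every partition: nothing to scan
    # Only the one covering partition is touched; the rest are skipped.
    return [row for row in partitions[idx] if row[0].startswith(month)]

print(query_month("2024-02"))
```

The speedup in the TLDR comes from exactly this shape: work is proportional to one partition's rows, not the whole logical table.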
2h ago · 2 min read · Read the original article: GPUs Just Got 6x More Valuable — Nate's Substack. Summary / Main Thesis: The real competitive advantage in the AI infrastructure war isn't faster chips or bigger fabs — it's a compression algorithm. A paper published by Google R...
Building, What Matters.... · 2 posts this month
APEX, ORDS & the Oracle Database · 1 post this month
Obsessed with crafting software. · 4 posts this month
Sr. Staff Software Engineer @ CentralReach - Working with MAUI / .NET / SQL Server / React · 1 post this month
Most are still shipping “AI add-ons.” The real shift happens when the whole workflow disappears into one action — that’s when users actually feel the value.
The OWASP LLM risks become even more critical when you consider that AI coding agents now have shell access and can modify files directly. Prompt injection isn't just a chatbot problem anymore — it's a supply chain risk when an agent reads untrusted input (like a GitHub issue body) and executes code based on it. Two practical mitigations I've found effective: 1) Sandboxing agent execution so it can't access credentials or production systems, and 2) Using pre-commit hooks that scan for common patterns like hardcoded secrets or suspicious shell commands in AI-generated code. Claude Code's hook system supports this natively, which helps enforce security gates in the CI pipeline automatically.
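Mitigation 2 above can be sketched as a tiny pre-commit scan. The patterns below are illustrative only (a real scanner such as gitleaks ships far more, with entropy checks and allowlists); nothing here comes from the comment's own tooling:

```python
import re

# Illustrative red-flag patterns for AI-generated diffs; not exhaustive.
SUSPICIOUS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id shape
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{8,}"),  # hardcoded secret
    re.compile(r"curl[^|\n]*\|\s*(ba)?sh"),                   # pipe-to-shell
]

def scan(text: str) -> list:
    """Return suspicious matches found in a diff or generated file."""
    hits = []
    for pattern in SUSPICIOUS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

diff = 'api_key = "sk-test-1234567890"\nprint("hello")\n'
findings = scan(diff)
print(findings)  # flags the hardcoded api_key line
```

Wired into a pre-commit hook (or a Claude Code hook), a non-empty result blocks the commit, which is the "security gate" the comment describes.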
API docs get attention. The frontend/API contract usually doesn't. TypeScript helps, but types lie without runtime validation. The API returns an unexpected null, a renamed field, an edge case you never tested, and your types have no idea. Zod fixes this. Parse at the boundary. If the API changes shape, you catch it at the schema, not in a Sentry alert a week later. We do this with Next.js Server Actions too. The server/client boundary is the natural place to validate. Keep the schema next to the call. The documentation problem and the type-safety problem are usually the same problem.
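Zod itself is TypeScript, but the parse-at-the-boundary discipline the comment describes is language-agnostic. Here is a hand-rolled Python sketch of the same idea (the `User` shape and field names are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    email: str

def parse_user(raw: dict) -> User:
    """Runtime check of the wire shape: fail loudly at the boundary,
    not deep inside application code a week later."""
    if not isinstance(raw.get("id"), int):
        raise ValueError(f"user.id: expected int, got {raw.get('id')!r}")
    if not isinstance(raw.get("email"), str):
        raise ValueError(f"user.email: expected str, got {raw.get('email')!r}")
    return User(id=raw["id"], email=raw["email"])

ok = parse_user({"id": 7, "email": "a@b.dev"})
print(ok)

try:
    parse_user({"id": 7, "email": None})  # the "unexpected null" case
except ValueError as e:
    print("caught at boundary:", e)
```

With Zod the equivalent is `schema.parse(response)` right where the fetch happens; the point is the same: the schema, kept next to the call, is where shape drift surfaces.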
This really clicked for me. It kind of reminds me of how backend systems moved away from one big service into smaller pieces that each do one thing well. Also the idea that context is something you have to manage instead of just keep adding to it… that changes how you approach the whole thing. Feels like the hard part isn’t prompting anymore, but how you structure everything around it.
You’re definitely not alone: that “Step 5 bottleneck” is where most AI-assisted teams hit reality. Right now, most teams aren’t fully automating reviews yet. The common pattern I’m seeing is a hybrid approach, not purely human or purely automated. What others are doing: AI generates code → Automated checks (linting, tests, security, architecture rules) → Targeted human review (not full manual review). 👉 The key shift: humans review intent + architecture, not every line.
Great breakdown of a decision most teams get wrong by defaulting to whatever's trending. The key insight people miss: BFF isn't an alternative to API Gateway — they solve different problems at different layers. API Gateway handles cross-cutting concerns (auth, rate limiting, routing) while BFF handles client-specific data shaping. You can absolutely run both. Where GraphQL fits depends on your team's query complexity — if your frontend needs to fetch deeply nested, variable-shape data across multiple domains, GraphQL shines. But if you're mostly doing CRUD with predictable payloads, a BFF with REST is simpler to cache, easier to debug, and doesn't require the schema stitching overhead. The real question should be: how many distinct clients are consuming your API? One client = REST is fine. Three+ clients with wildly different data needs = that's where BFF or GraphQL earns its complexity budget.
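The gateway/BFF split described above can be sketched in a few lines: the gateway has already handled the cross-cutting concerns upstream, so the BFF layer only shapes data for one specific client. The upstream services here are stubs and every field name is hypothetical:

```python
# BFF sketch: auth/rate limiting happened at the API gateway already;
# this layer aggregates upstreams and shapes the payload for mobile.

def fetch_user(user_id: int) -> dict:
    """Stub for the user service; a real BFF would call it over the network."""
    return {"id": user_id, "name": "Ada", "created_at": "2021-04-01", "plan": "pro"}

def fetch_orders(user_id: int) -> list:
    """Stub for the order service."""
    return [{"id": 1, "total_cents": 4200, "items": ["a", "b"]},
            {"id": 2, "total_cents": 150, "items": ["c"]}]

def mobile_profile(user_id: int) -> dict:
    """Client-specific shaping: mobile wants a small, flat payload,
    not the full objects the domain services return."""
    user = fetch_user(user_id)
    orders = fetch_orders(user_id)
    return {
        "name": user["name"],
        "plan": user["plan"],
        "order_count": len(orders),
        "last_total": orders[-1]["total_cents"] / 100 if orders else None,
    }

print(mobile_profile(7))
```

A web BFF would expose a different shaping function over the same upstreams; that divergence per client is exactly the complexity budget the comment says three-plus clients have to earn.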
I keep running into the same issue in almost every API project I work on:
👉 The API works
👉 The tests pass
👉 But the documentation is already outdated
And the bigger the system gets (microservices,
One tool missing from most API doc discussions: MCP (Model Context Protocol) server definitions. If you're building APIs that AI agents cons...