During my university days, I avoided building CRUD apps, not because I couldn’t, but because they felt boring. To me, it was just a client (web or mobile) talking to a server that ran some SQL queries. I dodged them in every project I could. Fast forward three years: after graduating and working at Oracle, I’ve realized that almost everything in software is just a fancy CRUD app. And you know what? I’ve actually started to enjoy it. Things like designing infrastructure, managing database migrations, handling files, and keeping the architecture clean have become way more interesting to me. Writing SQL and executing it in the backend is just one small part of the bigger picture.

In the enterprise world, though, everything runs on standards. We’ve got tools to generate boilerplate code, infrastructure is mostly automated, and a lot of functionality is already baked into internal libraries. It’s great for productivity, but it made me wonder: could I still build something entirely from scratch?

For years, I had this little web app I built for myself. It worked fine at first, but as time went on, I needed more features, and it just couldn’t keep up. The app was built with RedwoodJS, a framework that hides a lot of the server-side work and handles deployment and management automatically. It was OK, but I always had this itch to rebuild it, this time with full control of the backend, more features, better infrastructure, and a solid API and client.

So, I decided to take the plunge and rebuild it in Go. Now, I wasn’t exactly a Go expert, but I figured it was a good chance to learn. My main goal was to take everything I’d learned over the years as a “professional” developer and put it into practice.

Even though I knew the basics of Go, like its syntax, error handling, and how to use goroutines, I wasn’t sure how to properly structure a solid application. To figure it out, I grabbed Alex Edwards' book “Let’s Go Further” and started tweaking its ideas to fit my project.

After writing around 7,000 lines of Go code (which could easily grow to 11,000 with unit tests) and over 100 commits, I realized that I had essentially created a cookiecutter template for building REST APIs. It included a wide range of features, integrations with external tools, and a solid foundation for any API project. So, I decided to take it a step further, turn it into an actual cookiecutter template and publish it on GitHub for others to use.

Lessons Learned

If I were to build another API in Go, or any other language, here are the key takeaways I’d carry forward:

1. Start with a spec

Requirements can get tricky. People might say they just want an app that does “X,” but making “X” happen usually means dealing with a bunch of other stuff too. That’s why starting with a spec is so important: it’s like a blueprint for your API. It lays out what requests your API should handle, what responses it’ll send back, and all the endpoints with their HTTP methods. Having this upfront makes sure everyone’s on the same page and avoids headaches later on.

I used OpenAPI 3.0 to build the spec, and honestly, it’s a game-changer. The coolest part? It lets you generate code with tools like Swagger. Since the spec is written in YAML and works with any language, it’s like having a universal blueprint for your API. You can use it to auto-generate models for handling requests or even create client libraries to talk to your service. This saves you a ton of time and spares you from dealing with repetitive boilerplate code. Plus, fewer manual edits mean fewer chances to mess things up.
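To make that concrete, here’s a minimal, hypothetical slice of an OpenAPI 3.0 spec for a single endpoint. The resource names are illustrative, not from my actual project:

```yaml
openapi: "3.0.3"
info:
  title: Example API
  version: "1.0.0"
paths:
  /v1/movies/{id}:
    get:
      operationId: getMovie
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: integer, format: int64 }
      responses:
        "200":
          description: A single movie
          content:
            application/json:
              schema: { $ref: "#/components/schemas/Movie" }
components:
  schemas:
    Movie:
      type: object
      properties:
        id: { type: integer, format: int64 }
        title: { type: string }
```

From a fragment like this, a generator can produce the `Movie` model and a typed client method for `getMovie` in whatever language you need.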

But it’s not entirely magic, especially on the API side. Once the models are generated, you still need to manually define the routes, map them to the spec, and ensure that the requests you’re receiving and the responses you’re sending align with what’s outlined in the spec. This step is crucial to maintain consistency and avoid unexpected issues down the line.

I built a React app for my project, and honestly, the generated clients felt like magic. But when I tried using the typescript-fetch code generator, it was a total headache: it just didn’t work right out of the box. After messing around with it for a while, I decided to switch to typescript-axios. Sure, it added another dependency to the project, but it worked perfectly from the get-go and saved me a ton of time and frustration.

I know developers are lazy, especially when it comes to designing and documenting, but believe me when I say that once this is done, your drive to finish the project is bigger than usual.

2. Authentication is not that hard

Here’s a straightforward approach: when a user registers, store their email and a securely hashed password in the database. For login, validate the email and password, then generate a token (e.g., a JWT) that encodes the user ID. This token is sent back to the client and included in subsequent requests. On the server, use the user ID from the token to verify permissions and validate resource ownership.
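As a minimal, dependency-free sketch of that token flow, here’s the idea using an HMAC-signed token in place of a real JWT library. The secret and user ID are placeholders; in practice you’d reach for a JWT package and bcrypt for the password hash:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// secretKey is a placeholder; load it from configuration in a real app.
var secretKey = []byte("change-me")

// signToken produces a minimal "userID.signature" token, a stand-in for a
// real JWT to keep the sketch dependency-free.
func signToken(userID string) string {
	mac := hmac.New(sha256.New, secretKey)
	mac.Write([]byte(userID))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return userID + "." + sig
}

// verifyToken returns the user ID if the signature checks out.
func verifyToken(token string) (string, bool) {
	parts := strings.SplitN(token, ".", 2)
	if len(parts) != 2 {
		return "", false
	}
	mac := hmac.New(sha256.New, secretKey)
	mac.Write([]byte(parts[0]))
	want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	// Constant-time comparison to avoid leaking signature bytes.
	if !hmac.Equal([]byte(want), []byte(parts[1])) {
		return "", false
	}
	return parts[0], true
}

func main() {
	token := signToken("42")
	id, ok := verifyToken(token)
	fmt.Println(id, ok)
}
```

On the server, middleware would call something like `verifyToken` on each request and use the recovered user ID to check resource ownership.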

Honestly, you don’t need fancy services like Clerk, Firebase, or Supabase to handle authentication. For most projects, rolling your own is totally doable and a great learning experience. Sure, third-party services can make things easier, especially if you need something like social logins or multi-factor authentication. But for a lot of use cases, keeping it simple and doing it yourself works just fine, and you’ll learn a ton in the process.

3. Just write SQL

I’ve used ORMs before, and while they can be helpful, they come with their own set of headaches. The biggest issue? They add this extra layer of abstraction that can sometimes mess things up. I remember this one time at work when an ORM-generated query ended up locking way more rows than it should have during an update. It caused a chain reaction of internal errors and brought the whole system to its knees. Not fun.

But honestly, the real pain for me isn’t even about performance, it’s the developer experience. Trying to figure out how to write complex queries, especially with many-to-many relationships, feels like pulling teeth. I’d spend more time digging through ORM docs than actually solving the problem I was working on.

Looking back, I realized I was stressing over something that almost never happens: switching database vendors. In all my years of coding, I’ve never had to move from Postgres to something else, and honestly, I don’t see why I ever would. Writing raw SQL just gives you so much more control. It’s easier to debug, tweak, and handle tricky queries without an ORM getting in the way. Sometimes, keeping it simple with the basics is the way to go.
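To illustrate, here’s the kind of many-to-many query that feels like pulling teeth through an ORM but reads plainly as raw SQL. The movies/genres schema is hypothetical:

```go
package main

import "fmt"

// Hypothetical schema: movies, genres, and a movie_genres junction table.
// The $1 placeholder lets the driver handle escaping safely.
const moviesByGenre = `
	SELECT m.id, m.title
	FROM movies m
	JOIN movie_genres mg ON mg.movie_id = m.id
	JOIN genres g ON g.id = mg.genre_id
	WHERE g.name = $1
	ORDER BY m.title`

func main() {
	// With database/sql this would run as:
	//   rows, err := db.QueryContext(ctx, moviesByGenre, "sci-fi")
	fmt.Println(moviesByGenre)
}
```

The query is exactly what hits the database: no hidden joins, no surprise locking, and you can paste it straight into psql to debug it.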

4. Avoid DRY

When you’re introduced to the concept of DRY (Don’t Repeat Yourself), it seems like a no-brainer: why maintain duplicate code when you can consolidate it? However, I’ve come to realize that applying it aggressively is a form of premature optimization.

Don’t consolidate code until the duplication actually becomes unmanageable. Over-optimizing for DRY can lead to a loss of flexibility and an obsession with saving as much code as possible. Early in development, it might seem like a great idea: everything looks clean and efficient. But as your project grows beyond 3,000 lines and requirements evolve, you’ll encounter edge cases in those reusable functions that seemed perfect at first. That’s when you might start regretting the overly rigid architecture you built.

That said, creating a spec for my API is, in a way, an implementation of DRY. By defining the spec, I avoided manually creating models for parameters, requests, and responses, and I could even generate entire client libraries. So figure out how much DRY you want in your project.

5. File uploading can be a mess

For my application, users can upload images that must remain private and inaccessible to other users. This meant I couldn’t use public buckets and store the URLs. Instead, I implemented a secure workflow: when a user uploads an image, it is converted to PNG format, stored in a private bucket, and its object key is saved in the database.

When someone requests an image, the app downloads it from the private bucket, converts it to Base64, and sends it back through the API. This way, the images stay secure and aren’t directly exposed. This approach ensures that images are securely stored and only accessible through controlled application logic, maintaining user privacy and security. Sure, presigned URLs are a popular option, but since these images are sensitive, I didn’t want to risk even a small window of unauthorized access.

Handling file uploads in Go can be tricky. One of the challenges I faced was dealing with the generated code for form data, which used a *os.File data type. This created complications, since HTTP requests typically hand you the file via FormFile. Managing temporary files, converting formats (e.g., from JPG, HEIC, or GIF to PNG), resetting the file pointer after processing, and ensuring files were properly closed outside their respective functions all added complexity.

This lesson might be specific to my application, but it’s a valuable reminder about handling images on the server.

6. Log smarter, not harder

Logging is a balancing act: it should provide enough detail to debug issues effectively, but not so much that it overwhelms your system or racks up unnecessary costs, especially with services like AWS CloudWatch.

Here is what I found most useful when logging:

  • Include essential fields like level, time, requestId, message, and any additional properties relevant to the context. JSON logs are easier to parse and integrate with modern logging tools.
  • Assign a unique identifier to each incoming request.
  • Instead of logging errors in every function, log them right before sending the response to the user.
  • Capture the parameters of incoming requests, the database query you’re about to execute, and the query parameters.

7. Create a good local testing environment

Most applications depend on a database, but provisioning one in a cloud environment for local testing can be costly. A more practical approach is to use a local database on your machine to fully test your app.

The setup I found most effective involves containerizing your API and using a docker-compose.yaml file to orchestrate both the API container and a database container. This includes:

  1. Create both the API and database containers simultaneously.
  2. Use a script to initialize the database and apply migrations locally.
  3. Ensure the app waits for a database health check before starting.
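The steps above can be sketched in a docker-compose.yaml along these lines (image tag, ports, and credentials are placeholders):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: localdev
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 15
  api:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      db:
        condition: service_healthy
```

The `condition: service_healthy` dependency is what makes the API wait for the database health check instead of racing it at startup.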

While setting this up may take some time initially, it significantly speeds up the testing process and ensures a smoother development workflow, and it’s basically free.

8. Unit tests are overrated

I’ve noticed over the years that Test-Driven Development (TDD) can sometimes give developers a false sense of confidence about their features. With unit tests, what are you really testing? Often, you’re mocking or faking the critical parts of your code, essentially telling it what to return without actually executing the real logic.

In my experience, unit tests are only truly useful for isolated functions that have no dependencies, like parsing strings or extracting information. For anything more complex, they can fall short.

Instead, use the local testing environment you’ve set up and focus on writing good integration tests. By actually calling your local API and testing the full flow, you gain far more confidence that your application works as intended. Integration tests ensure that new features function correctly and that existing functionality remains intact.

If you’re practicing TDD, consider shifting your focus from unit tests to integration tests. This approach provides a more realistic and reliable way to validate your application.

9. Make your infrastructure deployable across environments

Now that you have a solid local testing environment and reliable integration tests, you might feel ready to deploy your code to your infrastructure. However, it’s crucial to have a dedicated development or staging environment where you can run integration tests before pushing to production. This step helps catch issues that might only surface in a cloud environment, such as policy or permission errors.

Using an Infrastructure as Code (IaC) tool like Terraform can make managing your infrastructure much easier. It allows you to track changes, replicate environments, and maintain consistency across deployments. To ensure smooth testing, create separate environments for development, staging, and production. This can be as simple as adding a prefix or suffix to resource names or as complex as managing entirely separate accounts for each environment.
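As a sketch of the prefix approach in Terraform (the resource and bucket names are made up):

```hcl
variable "environment" {
  type = string # e.g. "dev", "staging", "prod"
}

resource "aws_s3_bucket" "uploads" {
  # Prefixing resource names keeps environments from colliding
  # while reusing the same configuration.
  bucket = "${var.environment}-myapp-uploads"
}
```

Applying the same configuration with a different `environment` value gives you an isolated copy of the stack.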

One thing to keep in mind: managed databases on AWS can get expensive. Consider using other options like Turso, Railway, Neon, or Planetscale.

10. AI code assistants are kinda awesome

At first, I didn’t buy into all the hype about AI replacing programmers. Back in college, I was one of those people who thought programming was this super creative thing, almost like an art form. I figured there was no way an AI could ever match the way humans think and solve problems.

But then GitHub Copilot became free, and I decided to give it a try. I hadn’t used it much until I was working on the web app for this project. I encountered a situation where I needed to refactor some code to support additional input types. Instead of manually renaming symbols, I noticed the Copilot option when I right-clicked. Curious, I selected the “fix” option, and to my surprise, it refactored the code almost perfectly, about 90% of the way there. I only had to make a couple of minor adjustments.

That moment was a game-changer for me. It completely shifted my perspective on AI code assistants. From that point on, I started using GitHub Copilot throughout the project, and it became an invaluable tool in my workflow.

And yeah, code is rarely perfect on the first try, or even close. But tools like Copilot can significantly speed up a developer’s workflow. Once I had implemented how updates would work for one API resource, I asked Copilot to replicate the same logic across the other resources. It wasn’t flawless: there were a few minor bugs, but they were easy to spot and fix.

I don’t believe tools like Copilot will replace programmers anytime soon. You still need a solid understanding of your project’s architecture, a clear idea of what needs to be done, and the technical knowledge to define function signatures and guide the implementation. Copilot doesn’t replace that expertise; instead, it amplifies it.

What really stood out to me was how productive I felt using it. Since I already had integration tests in place to verify that everything was working as expected, I could confidently rely on Copilot to handle repetitive tasks and focus my energy on the more complex parts of the project.

Conclusion

Creating something from scratch is incredibly rewarding; I found the process of building this far more enjoyable than simply using it.

If you’re a developer, I’d say give building your own API from scratch a shot. Skip the fancy libraries and focus on nailing the basics. Once you’ve got a solid setup and feel good about it, why not turn it into a cookiecutter template? You already have most of the code set up and can probably reuse it in future projects, and it’s a great way to share your work and help others kickstart theirs.

And finally here is a link to the go-api-cookiecutter on GitHub. Feel free to check it out, fork it, and customize it to fit your needs.