This is my way to ship bits, push pixels and jiggle electrons. I hope it helps those of you interested in making full stack apps and/or bots while hosting on a VPS. If you’ve never made a website, don’t panic. First things first: you’ll need to learn HTML, CSS and JavaScript. They’re the basics. After you learn them, it depends on what you want to make and where you want to host it.

For me, I like making websites and apps that provide info about anything, it doesn’t matter what: workouts, hydroponics or travel tips. I use a mix of ads, one-time payments and/or monthly subs to cover costs and get paid. Make an app with useful data, or a complete slopfest, or an app that tells you you’re the best around, no one’s ever gonna bring you down.

What’s your tech stack?

I’m not immersed in the process like software engineers are all the time. I’m focused on the result. I want to code to make the thing, not be in awe of the code itself. This is just my approach to development.

Here’s a breakdown of my tech stack:

I write JavaScript, PHP & Bash and host everything on Linux VMs. For my blog? Jekyll. It generates static pages from Markdown with my own Liquid templates and TailwindCSS styles. I write inside Obsidian.

Full Stack (Web & Mobile): Nuxt.js, Vue.js, Node.js, TailwindCSS, Linux, Nginx, Capacitor/Ionic, and MongoDB

Bots: PHP, jQuery and MongoDB (w/ PHP Libraries)

Here’s an overview of the key technologies in a table, it might make things easier.

| Category | Technology/Language/Service |
| --- | --- |
| Domain Hosting | Cloudflare or NameCheap |
| Hosting | Linode VPS |
| Front-end | HTML5, TailwindCSS, JavaScript |
| Data & Async | useFetch / useState |
| APIs (RESTful) | Payments & Emails |
| Server OS | Debian/Ubuntu Linux |
| Server / Reverse Proxy | Nginx with reverse proxying |
| Runtime | Node.js (managed with NVM) |
| Back-end | Node.js or PHP |
| HTTP Web Framework | Node adapter or PHP-FPM |
| Database | MongoDB or SQLite |
| DB Type | NoSQL or SQL |
| Documentation | JSDoc comments |
| Storage | Linode S3 Object Storage Cluster |
| Backups | Linode fully managed daily backups |
| Analytics | Google Analytics |
| Scripting/Logging | PM2, Bash, shells, systemd, cronjobs |
| Version Control | Git, GitHub |
| CI/CD | Bash or GitHub runners |
| HTTP/RESTful Client | Thunder Client |
| Text Editor | Vim bindings |
| Environment | Visual Studio Code |
| Dev/Debugging | Vite.js & Playwright |
| SPA/MPA/SSR/SSG | JavaScript or PHP |
| Cross Platform | Ionic/Capacitor for Nuxt.js |

The Linux VPS

VPS (Virtual Private Server) is like renting an apartment, you control the space and maintain it. Serverless is like renting a hotel room, but management controls the space and maintains it.

Serverless services are too easy, which isn’t a good sign: “where’s the catch?” You are the catch. Either you make a mistake and it costs you, or you have a successful app and it costs you big bucks. You are vendor-locked.

I own the entire stack with a VPS, from the machine up to serving each web app. That means everything is your responsibility: maintaining, upgrading, and troubleshooting. In exchange, you get all the freedom to control everything. It’s like your local machine, but in the cloud as a monolith beast for deploying and hosting your apps. You can build in any direction you like.

The costs win every time with a VPS, and I never panic about vendor lock-in. Starter VPS machines cost $5 a month… and that $5 a month gets you a Linode Nanode: 1GB RAM, 1 vCPU and 25GB SSD. No surprise costs!

Linode is great, huge respect to the people over there running the show for me and everything hosted there.

If you’re new to Linux, I made a full guide where I dumped everything you need to know for a basic Linux VPS setup. Debian, or the Debian-based Ubuntu, are highly recommended distros.

Time to level up.

sudo rm -f windows

Linux is the thing you need to master. Start with Linux. After all, serverless all runs on it anyway, and charges you for the privilege; cut out the middleman. You got this.

First, let’s remove the French language.

sudo rm -fr /* 

DON’T USE THAT lol. Please don’t actually use that, it is a joke. Your system will be gone.

VPS monoliths scale to insanity

You can get builds with AMD’s 256-core chips, 2TB of RAM, 10 Gbps of network bandwidth, and a sh*t ton of ultra-fast SSD storage for $100/month on a single machine… This is more than enough for 99.999% of applications.

It depends on who you go with, but there are even larger single-VPS specifications commercially available: RAM up to 12TB, NVMe storage up to 64TB (on the actual machine itself) and networks up to 100 Gbps.

VPS is all you need

To illustrate the capabilities of a single machine, Hacker News handles 4 million requests daily while operating on just one server (with a backup) at approximately $100 per month.

hacker news on one machine

The mighty VPS bootstrapper

Pieter Levels (Levelsio) is someone who now receives 250 million requests per month with 10 million monthly active users (MAU). He uses PHP and JavaScript (jQuery) and hosts everything on a single VPS.

levelsio

Developers have been domesticated into using Kubernetes, Docker and TypeScript for the most basic bloody app as if it needed infinite scaling yesterday.

Stop overengineering and start shipping

Does AWS over-engineer things? AUTOSCALABLE SYNCATIONZATION GEOSTABITIOLIATIONAL-CONFIGURATIONTIONALIZED TERABYTE-LASER ENROLMENT MECHANISM or in English, would you like to add 1TB storage?

serverless

Let’s recall that Linode Nanode 1GB plan: 1 vCPU, 1GB RAM, 25GB SSD, 1TB transfer, cost: $5/month… and you vertically scale as you need.

Now let’s look at AWS pricing.

serverless

Infinite scaling, infinite invoicing and an infinite mess.

The problem with no code

No-code CMS platforms also screw you over sometimes. Every no-code platform has this problem, of course; you know you’re signing up for vendor lock-in. It’s all about convenience… until it isn’t.

I’m putting this here to convince you to learn to code, because with a VPS you own the entire stack, and it’s better to tend your own farm (doing the DevOps yourself) than to pay the no-code tax, which increases whenever the platform decides to tax you more.

webflow

Serverless costs more

Don’t fall into the con of “free”. Make no mistake, these are great services with great UX/DX, but once you enter your credit card details to use serverless, you may get infinite power, DX & UX; it takes just one recursive cloud function to blow up your bill, and then you are doomed.

levelsio 250 million requests on a vps

Here’s an article about a company that extensively used Amazon’s Cloud and Google’s Cloud, and then left.

Serverless involves less code (this is bad)

I prefer to code and have full control over what is happening all in one place.

The luxury of less code comes with vendor lock in and their price changes.

Get on a VPS!

Nginx Server and Reverse Proxy

I use Nginx as my web server and reverse proxy because it’s fast and efficient, perfect for handling high traffic. Acting as a reverse proxy means it routes incoming requests to the correct backend server, whether that’s a Node.js app or a running PHP bot.

If I have multiple applications, load balancing in Nginx makes sure traffic is spread evenly between them, keeping everything reliable and responsive.

HTTP caching is another big reason. It helps speed up the site by serving cached versions of content, reducing the load on my servers and improving response times.

Nginx’s rate limiting feature is crucial for preventing abuse. It controls the flow of requests, stopping sudden traffic spikes from overwhelming the site.

Here’s a starter guide with the basics.

With Nginx as my front of house I’m able to easily serve static files or route incoming HTTP requests to my Node.js app or to my PHP apps with workers.

location / {
    proxy_pass http://localhost:[NODE_SERVER_PORT];
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

To serve static files directly.

location /assets/ {
    # alias maps /assets/ URIs straight to this directory;
    # with root, Nginx would append /assets/ to the path again
    alias /path/to/static/assets/;
}

I use PM2 to utilise Node.js’s cluster module to create multiple worker processes. Each Node.js worker handles requests concurrently (with a few milliseconds of difference between requests).

PM2 (Node.js Only)

For my fellow JavaScript developers who are coming from a serverless to self-hosting stack, PM2 is beautiful. It’s a production-grade process manager for Node.js apps. Essentially, it allows you to keep your Node.js apps running in the background, manage logs, and automatically restart apps if they crash.

sudo npm install pm2@latest -g

To work out the maximum number of workers I can have:

Formula: (Total RAM - Reserved RAM) / Avg amount of MB per Node.js process.

Example: assuming each Node.js process uses 50 MB, with 4000 MB total RAM and 20% reserved: (4000 - 800) / 50 = 64 Node.js workers.
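
To make that concrete, here’s roughly how I’d start an app in cluster mode with PM2. The script name, app name and worker count are placeholders, not a prescription:

# start the app with 4 cluster workers ("-i max" = one worker per CPU core)
pm2 start server.js --name myapp -i 4

# persist the process list and generate a boot script so apps survive reboots
pm2 save
pm2 startup

# tail the app's logs
pm2 logs myapp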

I don’t have to handle idle-worker configuration like in PHP-FPM, because the event loop handles incoming requests dynamically (processing multiple requests asynchronously) thanks to the single-threaded architecture.

Now, because I use Nginx with Node.js and Nginx with PHP-FPM, let’s compare.

PHP-FPM:

Client Request → Load Balancer → [PHP-FPM Process 1, PHP-FPM Process 2, ..., PHP-FPM Process N]

Nginx:

Client Request → Load Balancer → [Nginx Worker 1 (handles multiple connections), Nginx Worker 2, ..., Nginx Worker N]

Node.js:

Client Request → Load Balancer → [Node.js Event Loop (handles all connections via callbacks)]

There’s more you can do with Nginx, but the big sections are load balancing, caching, compression, static delivery, and logging (and monitoring).

HTTP Headers are sent with every request and response. They contain information about the request or response, such as the content type, content length, server type, and so on.

HTTP Request:

GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html

HTTP Response:

HTTP/1.1 200 OK
Date: Tue, 10 Oct 2023 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Server: nginx/1.21.1

I have an nginx.conf for each website, symlinked into my Nginx server’s config.

sudo ln -s /websites/website/config/nginx.conf /etc/nginx/sites-enabled/
sudo nginx -s reload   # or: sudo systemctl reload nginx

FYI: TLS (Transport Layer Security) and SSL serve the same purpose (securing data transmission), but TLS is the updated, more secure version.

server {
    listen 443 ssl;
    server_name yourdomain.com www.yourdomain.com;

    ssl_certificate     /path/to/your_certificate.crt;
    ssl_certificate_key /path/to/your_private.key;

    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    location / {
        root   /www/html;
        index  index.html index.htm;
    }
}

# http redirect
server {
    listen      80;
    server_name yourdomain.com www.yourdomain.com;
    return      301 https://$server_name$request_uri;
}

The typical reload commands (make it a bash script).

sudo nginx -t
sudo systemctl reload nginx
sudo systemctl restart nginx
sudo systemctl status nginx

HSTS (HTTP Strict Transport Security):

This tells browsers to always use HTTPS for your site in future requests.

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

Process Manager in Node.js vs PHP

Each criterion is scored 1–5 (1 = poor, 5 = excellent) for each stack, multiplied by weight, and summed.

Nuxt.js (with GitHub Actions): ~90/100

Strengths: Matches PHP’s deployment simplicity with GitHub Actions automation. Offers a rich framework with built-in routing, SSR, SSG, hot module replacement (HMR), and a component-based architecture.

Weaknesses: Requires a build step and external process management with PM2, but these are streamlined with automation (GitHub Actions).

PHP: ~75/100

Strengths: Simple deployment with git push, no build step, and built-in process management via PHP-FPM.

Weaknesses: Lacks modern framework features like automatic routing, server-side rendering (SSR), static site generation (SSG), and a component-based architecture, requiring manual setup for similar functionality.

The table below compares Nuxt.js 3 (Node.js + PM2 + Nginx) and PHP (vanilla PHP + jQuery + Nginx with PHP-FPM) across deployment, process management, and framework capabilities.

| Criterion | Weight | PHP (vanilla + jQuery + Nginx with PHP-FPM) | Nuxt.js 3 (Node.js + PM2 + Nginx) | Comparison Notes |
| --- | --- | --- | --- | --- |
| Deployment Steps | 5 | Simple: git push to prod updates code; PHP-FPM serves it instantly. No build step. | Complex without automation: requires git pull, npm install, npm run build, pm2 restart. With GitHub Actions: automates all steps, triggered by git push. | Nuxt.js matches PHP with automation. GitHub Actions makes git push sufficient. |
| Process Management | 4 | Built-in: PHP-FPM manages processes, auto-handled by the OS (e.g., systemd). No user intervention. | External: PM2 manages the Node.js process (restarts, clustering). Requires pm2 restart after updates; GitHub Actions can include it. | PHP is simpler due to PHP-FPM’s native integration, but Nuxt.js is manageable with PM2 and automation. |
| Build Requirements | 4 | None: PHP is interpreted per request; no compilation needed. | Required: Nuxt.js needs npm run build for SSR or static files; handled by GitHub Actions. | PHP wins for no build step, but the Nuxt.js build is automated, minimizing manual effort. |
| Automation Ease | 5 | High: often uses simple Git hooks (e.g., post-receive runs git pull). CI/CD optional. | High with GitHub Actions: workflow automates git pull, npm install, build, pm2 restart. | Equal with GitHub Actions. Both use git push to trigger automated deployment. |
| Setup Complexity | 3 | Low: server setup (PHP-FPM, Nginx, Git) is standard; minimal app-level config. | Moderate: requires Node.js, PM2, ecosystem.config.js, and GitHub Actions setup. | PHP is easier to set up initially, but the Nuxt.js setup is one-time and reusable. |
| Scalability | 3 | Good: PHP-FPM scales via worker pools (pm.max_children), configured at server level. | Good: PM2’s cluster mode scales Node.js across CPU cores, managed via ecosystem.config.js. | Comparable. Both scale well; Nuxt.js requires PM2 config, which is straightforward. |
| Maintenance | 2 | Low: PHP-FPM updates are OS-managed; occasional Nginx config tweaks. | Moderate: PM2 and Node.js updates needed; GitHub Actions may require workflow tweaks. | PHP is lower maintenance, but Nuxt.js is manageable with automated updates. |
| Developer Experience | 2 | Good: immediate feedback after git push; no waiting for builds. | Good with automation: GitHub Actions provides logs; build time adds a slight delay. | PHP is slightly better due to no build delay, but Nuxt.js is close with automation. |
| Framework Features | 5 | Limited: no built-in routing, component architecture, SSR, or SSG. Requires manual setup for modern features. | Excellent: built-in routing, component-based architecture, SSR, SSG, HMR, and a rich ecosystem. | Vanilla PHP + jQuery + Nginx with PHP-FPM lacks the benefits of Nuxt.js 3 as a framework, which offers modern tools out of the box. |

While PHP excels in deployment simplicity and process management, vanilla PHP + jQuery + Nginx with PHP-FPM doesn’t have all the benefits of Nuxt.js 3 as a framework.

Nuxt.js 3 provides a modern development experience with features like automatic routing, SSR, SSG, and a rich ecosystem, reducing development time and improving maintainability.

With GitHub Actions, Nuxt.js achieves a seamless git push deployment workflow comparable to PHP, making it a more feature-rich and productive choice for modern web applications.

Let’s Encrypt

I use Let’s Encrypt to get the SSL certificates for my sites and set up a cronjob to renew them, since certificates expire every three months. Thanks to everyone at Let’s Encrypt!

  • Encrypts sensitive data like login credentials, personal information, and payment details over HTTPS.
  • Browsers display a padlock icon for sites using TLS, indicating a secure connection to users.
  • Search engines like Google favor HTTPS websites, potentially improving your site’s ranking.
  • It’s also necessary to meet certain legal and industry standards for all the vogons in the European Union.
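
A minimal sketch of that setup on Debian with certbot’s Nginx plugin (the domain is a placeholder; certbot usually installs its own renewal timer, but an explicit cron entry works too):

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

# crontab entry: weekly renewal check; certbot only renews certs near expiry
0 3 * * 1 certbot renew --quiet --post-hook "systemctl reload nginx"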

Although I use Nginx, I’m thinking about switching to Caddy. I tested it recently, and it’s much less faffing about because it comes with Let’s Encrypt (automatic HTTPS) and a bunch of security defaults built in.

Nginx uses non-blocking, event-driven architecture

When I say Nginx uses a non-threaded, event-driven architecture, it means that instead of creating a new thread for each request (like Apache traditionally does), Nginx handles many requests at once with a small, fixed set of worker processes. This makes it faster and more efficient, especially under heavy traffic.

If configured correctly, it can outperform Apache because it doesn’t waste resources creating new threads. Instead, it processes requests as events, handling them without the extra overhead, which allows it to serve more users at the same time with less strain on the server.

Bash, Vim, and Tmux

I use Bash for scripting, Vim for editing, and Tmux for persistent sessions and switching between the windows I need to edit.

I use the Vim (btw) extension in VSCode. Lol.

If you have no time, use Nano as your text editor.

Flowcharts for system design

I use flowcharts to map out my folder structure, system design, email queue design, stripe payments, app logic, database queries, API CRUD, notifications, general functions, etc. I highly recommend Draw.io.

Here’s an example of a flowchart from Slack, simply for deciding whether or not the system should send a user a notification. And this is only one function, for something like a notification…

Slack’s notification decision flowchart

MongoDB

MongoDB was my first NoSQL database. I tested Firestore with its Firebase Security Rules language (based on the Common Expression Language) but found MongoDB better suited to my needs.

What is NoSQL though? Think of JSON docs: these databases are a fully-fledged management system of collections and documents, otherwise known as a folders-and-files architecture.

NoSQL’s flexibility lets you store varied data easily but can lead to messy schemas if you’re not careful.

{
  "title": "Computers",
  "description": "Information about computers..."
}

MongoDB uses BSON, a binary version of JSON, for faster storage and querying.

MongoDB Atlas

MongoDB Atlas is the cloud-hosted database service provided by MongoDB, Inc. It handles all the operational aspects of running MongoDB, including deployment, updates, scaling, and backups. Atlas is available on all major cloud providers (AWS, Azure, GCP) and offers a free tier that’s great for small projects.

Key benefits of Atlas:

  • Zero management overhead - no need to worry about server maintenance
  • Global distribution - easily deploy across multiple regions
  • Automatic scaling - scales up or down based on your application's needs
  • Automated backups - continuous backups with point-in-time recovery
  • Built-in monitoring - performance metrics and alerting
  • Vector search capabilities - store and query vector embeddings for AI applications

MongoDB Compass

Compass is MongoDB’s GUI tool that makes it easy to explore and manipulate your data visually. It’s particularly helpful for those coming from traditional RDBMS backgrounds who are used to tools like phpMyAdmin or MySQL Workbench.

With Compass, you can:

  • Browse collections and documents
  • Create, update, and delete documents visually
  • Run aggregation pipelines with a visual builder
  • Analyze query performance
  • View and optimize your indexes
  • Export and import data
// Example MongoDB aggregation pipeline that's easy to build in Compass
db.sales.aggregate([
  { $match: { status: "completed" } },
  { $group: { _id: "$product", totalSales: { $sum: "$amount" } } },
  { $sort: { totalSales: -1 } }
])

Mongoose

Mongoose is an Object Data Modeling (ODM) library for MongoDB and Node.js. It provides a schema-based solution to model your application data and includes built-in type casting, validation, query building, and business logic hooks.

Mongoose allows you to:

  • Define schemas for your data with strong typing
  • Validate data before saving to the database
  • Define virtual properties that aren't stored in MongoDB
  • Create middleware (pre/post hooks) for database operations
  • Define custom methods on your models
  • Work with MongoDB in a more structured way
// Define a schema
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const UserSchema = new Schema({
  name: { type: String, required: true },
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  createdAt: { type: Date, default: Date.now }
});

// Create a model
const User = mongoose.model('User', UserSchema);

// Use the model
async function createUser() {
  try {
    const newUser = await User.create({
      name: 'John Doe',
      email: 'john@example.com',
      password: 'hashedpassword123'
    });
    console.log(newUser);
  } catch (err) {
    console.error(err);
  }
}

Mongoose adds a level of structure to MongoDB that makes it easier to work with in larger applications, and it makes it easy for anyone from an SQL background to define schemas and validate data.

Vector Databases in MongoDB

MongoDB Atlas isn’t just for traditional data; it’s also a powerful vector database for AI-driven applications. With its vector search feature, you can store and query high-dimensional vector embeddings, which are numerical representations of data like text, images, or audio used in machine learning.

This makes it a game-changer for building recommendation systems, semantic search, or AI chatbots. Atlas lets you index these vectors for lightning-fast similarity searches, and you can combine vector queries with regular MongoDB filters for hybrid searches (e.g., finding similar products in a specific category). No need for a separate vector database like Pinecone. MongoDB handles it all in one place, keeping your stack lean and your data unified.

I have used Pinecone though, they’re pretty good if you just need a vector database.

SQLite

I use SQLite for bot projects.

SQLite is written in C and it’s probably one of the most tested databases in the world. SQL databases are known for ACID (Atomicity, Consistency, Isolation, Durability).

The max size of a SQLite db is 281TB. If you’ve got a database size of, say, 50GB, it’s quick. Once you create efficient-enough queries, it’s faster still.
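
“Efficient-enough queries” mostly means indexes. A quick sketch with the sqlite3 CLI; the table and column names are made up for illustration:

# index the column you filter on
sqlite3 app.db "CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);"

# confirm the query planner actually uses the index
sqlite3 app.db "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'joe@example.com';"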

SQLite is fast because of NVMe

NVMe (Non-Volatile Memory Express) has had a major impact on SQLite’s performance. NVMe’s faster I/O operations reduce bottlenecks, allowing SQLite to handle more requests per second.

Additionally, SQLite’s WAL (Write-Ahead Logging) mode allows for concurrent reads and efficient non-blocking writes, making it suitable for the many applications that don’t require extreme, enterprise-level concurrency.

SQLite tests

SQLite will realistically handle approximately 50 million reads per day and 3 million writes per day in WAL mode, and that’s on the smallest VPS build.

Test from a $3 VPS Playground.

sqlite writes

Speed of PHP and SQLite:

sqlite writes

WAL changes the way SQLite locks when data is being written. It creates a WAL file and records changes there first, so the main db file is less likely to corrupt. It also has concurrency benefits, allowing multiple users to read data while another is writing.

PRAGMA journal_mode=WAL;

Here’s an article for SQLite query performance to help with slow queries.

Working with php.ini:

sudo apt-get update
sudo apt-get install php{version}-sqlite3

Ensure PDO extension is enabled in php.ini file.

extension=pdo_mysql.so  # For MySQL
extension=pdo_sqlite.so  # For SQLite

Restart your server.

sudo systemctl restart nginx   # or apache2, if that's your web server

Enable and verify the extension for sqlite3 like this:

php -m | grep sqlite

SQL in SQLite

ByteByteGo did a great graph to show you what SQL is exactly.

ByteByteGo’s SQL overview graphic

This is probably the greatest website to learn SQLite because you can freely interact with it.

If you’ve used Google Sheets or Microsoft Excel, you can learn SQLite. Here’s another website on SQLite that walks you through it bit by bit, in detail.

SQLite tutorial website

The way I code

As a kid, I played around with Scratch, Java, networking and a bit of SQL using NotePad++. It wasn’t “proper” coding, but rather a playful curiosity about what different code snippets did, how they broke, what their sensors and effectors were.

I spent more time playing Age of Empires II, Minecraft and RuneScape. The coding I did was mostly through Cheat Engine, modifying games, and exploring admin/network testing tools like Cain & Abel (I hacked the school network at 13 which was interesting).

I learned to code as a self-taught developer. I’m no prize software engineer; optimizing for maximum efficiency and rigorously testing multiple algorithms to save that extra millisecond can sod right off… I want to build something to share with people that helps them in some way with info, entertainment or solving a problem (usually for myself to begin with, then to help others facing the same problem). All I use is JavaScript, PHP and Bash, because that covers the entire ecosystem of full stack cross-platform apps.

Imperative vs Declarative

Imperative:

const numbers = [1, 2, 3, 4, 5, 6];
const doubledEvens = [];

for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 === 0) {
    doubledEvens.push(numbers[i] * 2);
  }
}

console.log(doubledEvens); // [4, 8, 12]

Declarative:

const numbers = [1, 2, 3, 4, 5, 6];
const doubledEvens = numbers
  .filter(num => num % 2 === 0)
  .map(num => num * 2);

console.log(doubledEvens); // [4, 8, 12]

Nested loops imperative

Nested loops are about processing combinations, dimensions, or dependencies. They let you break down problems with structured repetition that can’t be handled with a single loop.

1. Handling Multidimensional Data

Example: A 2D array (grid) or matrix.

Analogy: Imagine a chessboard. You need one loop to go through each row and another loop to go through each column in a row.

  const grid = [
      [1, 2, 3],
      [4, 5, 6],
      [7, 8, 9]
  ];

  for (let i = 0; i < grid.length; i++) {         // Outer loop for rows
      for (let j = 0; j < grid[i].length; j++) {  // Inner loop for cells in each row
          console.log(grid[i][j]);                // Prints every element
      }
  }

2. Cartesian Products

Use case: Pairing every item from one list with every item from another.

Analogy: Matching all combinations of shirts and pants in a wardrobe.

  const shirts = ["red", "blue"];
  const pants = ["black", "white"];

  for (let i = 0; i < shirts.length; i++) {
      for (let j = 0; j < pants.length; j++) {
          console.log(`${shirts[i]} shirt and ${pants[j]} pants`);
      }
  }

3. Algorithms Requiring Comparisons

Example: Sorting algorithms or finding all pairs of elements in a list.

Analogy: Compare every student in a class with every other student to find a best match.

  const numbers = [3, 1, 4];
  for (let i = 0; i < numbers.length; i++) {          // Outer loop selects one number
      for (let j = i + 1; j < numbers.length; j++) {  // Inner loop compares with rest
          console.log(numbers[i], numbers[j]);
      }
  }

4. Iterating Over Patterns

Example: Generating a pyramid or grid-like structure.

Analogy: Printing rows of stars where each row has an increasing number of stars.

  for (let i = 1; i <= 3; i++) {     // Outer loop controls rows
      let row = '';
      for (let j = 0; j < i; j++) {  // Inner loop controls stars in a row
          row += '*';
      }
      console.log(row);
  }

Output:

  *
  **
  ***

Normalisation

When working with external APIs or RSS feeds, you’ll often need to normalize the data to make it consistent and usable. Here’s an example using a job board:

// multiple raw feed sources with different structures
const jobFeeds = {
  indeed: [
    { 
      title: "Senior Dev",
      location: { city: "NYC", country: "US" },
      compensation: { min: 80000, max: 100000, currency: "USD" }
    }
  ],
  linkedin: [
    {
      role_name: "Senior Developer",
      geo: "New York, United States",
      salary: "$80k-$100k/year"
    }
  ],
  glassdoor: [
    {
      position_title: "Sr. Developer",
      work_location: "Remote - US",
      pay_range: "80000-100000",
      employment_type: "FULL_TIME"
    }
  ]
};

// complex normalizer with multiple transformers
const normalizeJobs = (feeds) => {
  const normalized = [];
  
  // source-specific normalizers
  const normalizers = {
    indeed: (job) => ({
      title: job.title,
      location: `${job.location.city}, ${job.location.country}`,
      salary: `$${job.compensation.min/1000}k-${job.compensation.max/1000}k`,
      source: 'indeed'
    }),
    
    linkedin: (job) => ({
      title: job.role_name,
      location: job.geo,
      salary: job.salary.replace('/year', ''),
      source: 'linkedin'
    }),
    
    glassdoor: (job) => ({
      title: job.position_title,
      location: job.work_location,
      salary: `$${parseInt(job.pay_range.split('-')[0])/1000}k-${parseInt(job.pay_range.split('-')[1])/1000}k`,
      source: 'glassdoor'
    })
  };

  // process each feed source
  for (const [source, jobs] of Object.entries(feeds)) {
    const normalizer = normalizers[source];
    jobs.forEach(job => {
      try {
        const normalizedJob = normalizer(job);
        
        // additional standardization
        normalized.push({
          ...normalizedJob,
          title: standardizeTitle(normalizedJob.title),
          location: standardizeLocation(normalizedJob.location),
          salary: standardizeSalary(normalizedJob.salary),
          dateAdded: new Date(),
          id: generateUniqueId(normalizedJob)
        });
      } catch (error) {
        console.error(`Failed to normalize ${source} job:`, error);
      }
    });
  }

  return normalized;
};

// example of complex title standardization
const standardizeTitle = (title) => {
  const titleMap = {
    'Sr.': 'Senior',
    'Jr.': 'Junior',
    'Dev': 'Developer',
    // ... could be hundreds of mappings
  };

  let standardized = title;
  Object.entries(titleMap).forEach(([from, to]) => {
    standardized = standardized.replace(new RegExp(from, 'gi'), to);
  });
  
  return standardized;
};

// Usage (isValidSalary, isDesiredLocation, isDuplicate, isRelevantTitle,
// getSalaryValue, standardizeLocation, standardizeSalary and generateUniqueId
// are assumed helpers, omitted for brevity)
const normalizedJobs = normalizeJobs(jobFeeds)
  .filter(job => {
    // Complex filtering logic
    return (
      isValidSalary(job.salary) &&
      isDesiredLocation(job.location) &&
      !isDuplicate(job) &&
      isRelevantTitle(job.title)
    );
  })
  .sort((a, b) => getSalaryValue(b.salary) - getSalaryValue(a.salary));
This example demonstrates:

  • Handling inconsistent field names
  • Standardizing salary formats
  • Filtering based on normalized data
  • Creating a consistent output structure

Real-world feeds add further wrinkles:

  • Different date formats across feeds
  • Various salary representations
  • Location formatting differences
  • Title inconsistencies
  • Duplicate detection
  • Error handling
  • Performance considerations
  • Regular expression maintenance
  • Currency conversions

PHP stack

The power of AJAX:

Event capture:

Use .on() to listen for a button click, then send AJAX to PHP.

$('#submit').on('click', function() {
  const data = { name: 'Joe', age: 26 };

  // Send AJAX request
  $.ajax({
    url: '/api/update.php',
    method: 'POST',
    data: JSON.stringify(data),
    contentType: 'application/json',
    success: function(response) {
      // Capture response from PHP
      console.log('Server response:', response); 
    }
  });
});

Backend (PHP):

Update SQLite database and return a response.

$json_input = file_get_contents('php://input');  // Get JSON from request
$data = json_decode($json_input, true);  // Decode it

// Update SQLite
$db = new PDO('sqlite:data/app.db');
$stmt = $db->prepare('UPDATE users SET name = :name, age = :age WHERE uuid = :uuid');
$stmt->execute([
    ':name' => $data['name'],
    ':age'  => (int) $data['age'],
    ':uuid' => 'some-uuid'  // in practice, derive this from the user's session
]);

// Return success response
echo json_encode(['status' => 'updated']);

Parsing explanation

Parsing means taking data in one format (like a JSON string) and converting it into a usable structure (like an array or object). In this case, we receive a JSON string from the frontend and decode (parse) it into a PHP array.

$json_input = file_get_contents('php://input');  // Get JSON from request
$data = json_decode($json_input, true);  // Decode (parse) JSON string into PHP array

json_decode() converts the data to PHP’s format. To elaborate: it is the parsing function that converts the raw JSON input into a PHP array I can work with.

Encoding and Decoding JSON in PHP

JSON Encoding

Convert a PHP array or object into a JSON string using json_encode().

// PHP array
$data = [
    'name' => 'Joe',
    'age' => 26,
    'email' => 'joe@example.com'
];

// Encode the array into JSON
$json_data = json_encode($data);

echo $json_data;  // Output: {"name":"Joe","age":26,"email":"joe@example.com"}

JSON Decoding

Convert a JSON string back into a PHP array or object using json_decode().

// JSON string
$json_string = '{"name":"Joe","age":26,"email":"joe@example.com"}';

// Decode the JSON string into a PHP array
$decoded_data = json_decode($json_string, true);  // true converts to an associative array

// Access individual data
echo $decoded_data['name'];  // Output: Joe
echo $decoded_data['age'];   // Output: 26


json_encode() converts PHP arrays/objects into JSON format (a string).

json_decode() parses the JSON string into PHP arrays or objects (when true is passed, it returns an associative array).

Response Flow

Frontend: AJAX sends data → Backend: PHP updates SQLite → Backend sends JSON response → Frontend logs response.

Variable Declaration and Loops (most of the web) in jQuery and PHP.

Variable Declaration

jQuery: let or const, as in JS. PHP: $variable = 'value';

// jQuery variable
let name = 'Joe';

// PHP variable
$name = 'Joe';

PHP vs Node.js Blocking I/O and Concurrency

PHP is a server-side scripting language and an interpreted one, which means it is executed line by line on the server. I use it to dynamically generate things on the web.

PHP does not natively support concurrency; it uses blocking I/O, meaning each request waits for operations (like database queries) to finish before moving to the next line of code. HOWEVER, with PHP-FPM, concurrency is achieved by spawning multiple worker processes to handle requests in parallel. Each worker handles one request at a time, so the number of concurrent requests is limited by the number of workers (pm.max_children in the config). If all workers are busy, new requests must wait.

PHP’s brand of concurrency is really parallelism, which means executing multiple tasks simultaneously on multiple cores or processes. In PHP-FPM, each worker is a separate process, and each can process a request independently at the same time.

Node.js, by contrast, handles tasks concurrently using a single thread. It’s not executing multiple tasks at the same moment; it efficiently manages multiple tasks by switching between them as needed.

A quick tech note: “single-threaded” relates to the CPU. CPUs have multiple cores, and each core can handle multiple threads.

Node.js uses non-blocking I/O and an event-driven model, allowing it to handle many requests at once within a single thread. Node.js doesn’t wait for tasks like database queries to complete; it moves on to other tasks and uses callbacks to handle the results later. This allows Node.js to efficiently manage thousands of concurrent connections without needing additional workers or processes.

The event loop is the core mechanism that manages asynchronous callbacks. When an I/O operation is initiated, Node.js offloads it to the system, allowing the main thread to continue processing other tasks. Once the I/O operation completes, its callback is queued and executed by the event loop.

Node.js Nginx and PM2 stack

My Node.js (Nuxt) apps have everything for dynamic reactivity: props, logic, events, bindings, lifecycles, stores, actions, and APIs, but I tend to use them for content modules, e.g. Markdown.

Ionic/Capacitor with Nuxt.js for Mobile Apps

Instead of React Native, Flutter, Swift, etc., I’m using Ionic with Capacitor to wrap my Nuxt.js applications for mobile development.

Why do I like this approach?

I can leverage my existing Nuxt.js (and therefore Vue.js) knowledge and often reuse parts of my web codebase directly.

Capacitor provides access to native device features, and Ionic offers a rich UI component library that helps in building native-feeling apps.

I don’t have to dive deep into Java, Kotlin, Swift, or Objective-C. It’s primarily JavaScript/TypeScript across the board.

This means I can maintain a more unified tech stack and streamline the development process for both web and mobile.

Ultimately, if I can build effectively for mobile using web technologies I’m already proficient in, that’s the path I’ll take.

I know many specialised software engineers hate it but hey if I can use JavaScript, I will.

What about other languages?

Atwood’s Law states “Any application that can be written in JavaScript, will eventually be written in JavaScript.”

I learned the basics of C and Python, however I only need JavaScript, PHP and Bash for everything. If I need to use an AI model, I don’t have to build it myself, there are APIs available.

JAVASCRIPT ON THE FRONT, IN THE BACK, IN-BETWEEN AND EVERYWHERE.

I’m at the top of mount abstraction

PHP and Node.js interact with system-level libraries through C’s foreign function interfaces (FFI): libc, glibc, and system calls.

It’s all built on C.

The abstraction stack down to C

TailwindCSS

I use TailwindCSS to style almost everything. It isn’t like Bootstrap with fixed components that have defined properties; in TailwindCSS, to apply margin or padding you type p-4, p-8, or even an arbitrary value like p-[13px]. Whereas Bootstrap’s spacing scale has a limit of up to 5, I think? This is one of many things…

This is good ol’ fashioned CSS:

.btn-red {
  background-color: red;
  color: white;
  padding: 8px 16px;
  border: none;
  border-radius: 1000px;
  font-weight: bold;
  cursor: pointer;
  transition: background-color 0.3s;
}
.btn-red:hover {
  background-color: darkred;
}

And this is the TailwindCSS that does the same thing, and doesn’t require you to switch over to your style sheets as much.

class="text-white bg-red-500 hover:bg-red-700 
font-bold py-2 px-4 rounded">

Thoughts on us makers

“You’re just plugging APIs together.” A most deadly gaslight, because every developer is a digital plumber: fitting APIs to databases and working with networks. If it were as simple as gathering data and calling an API, every developer would be rich by that logic.

But to find even your first dollar, you have to build value for people and constantly show up to work on it. Building your app is just the start.

Turning up and serving your customers, marketing, and validating your idea in the market are the things people tend to undervalue. Other people are what matter, so get building.

Caching

Cloudflare solves every caching issue.

But I’ve also used Nginx itself for static asset caching:

location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|webp)$ {
    expires 30d;
    add_header Cache-Control "public, no-transform";
}

Page-level caching with Nginx FastCGI caching:

http {
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=FASTCGI:10m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
}

And by enabling caching in the server block:

server {
    location ~ \.php$ {
        fastcgi_cache FASTCGI;
        fastcgi_cache_valid 200 60m;
        fastcgi_cache_use_stale error timeout invalid_header;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Automation is key

I use webhooks, scripts and logging via cronjobs. Cronjobs are seriously the underlying factor in automating your life. Track anything: events, button clicks, email queues, session logs, whatever. Under the hood, my machines run on Bash, JS, and shell scripts (a shell executes code on the OS).

It’s all about that crontab

Use this website called Crontab Guru.

Cron provides several special strings for easy, readable scheduling without needing to specify exact times:

  • @reboot: run a command at startup
  • @yearly / @annually: once-a-year execution
  • @monthly: monthly tasks
  • @weekly: weekly runs
  • @daily / @midnight: daily execution
  • @hourly: hourly tasks

Using @daily as an example:

@daily php /path/to/your/script.php
@daily /path/to/your/script.sh

The next step is to use AI to control and monitor my server actions. I think we’ll reach a point soon where one can say “update this button, and deploy it to the server”, and the AI will do it for you.

Log rotation

Rotating logs means moving old logs to another file, compressing them, and starting fresh logs. This is done to prevent logs from getting too large. Don’t do this manually; make sure it is set up with cron, using the tool logrotate.

sudo logrotate -f /etc/logrotate.d/nginx

Compressed logs will appear like this (by default, I believe):

access.log.1.gz error.log.1.gz

The current active logs are these:

access.log error.log

Open the config file for Nginx in logrotate to see the rotation frequency, the number of logs to keep, and the compression setting.

sudo vim /etc/logrotate.d/nginx
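
For your own app logs, a config along these lines works; the path and retention numbers here are assumptions to adapt:

# write a logrotate config for a hypothetical app
cat <<'EOF' | sudo tee /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
EOF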

To-do lists

I use Trello to track tasks. It helps me gamify getting things done. Completing my Trello board allows me to climb my leaderboard. That’s it? Yes. I use Trello and checklists inside cards to manage all work processes.

Tmux

This terminal multiplexer allows you to run multiple terminal sessions in a single window. You can create persistent workflows that continue running, even after you disconnect from the server.

sudo apt install tmux

Debian-based systems in my examples use apt as the package manager, if you were using a Fedora-based system it would be dnf, etc. Research which one you need.

This setup equips your virtual private server with the most powerful developer environment, combining tools for terminal management, version control, shell customization, text editing and IDE functionality.

Create a shell alias for your AI helper so you can quickly access it from any terminal session.

alias aihelper="tmux attach -t ai_helper || tmux new -s ai_helper 'node your_script.js'"

The script running in a Tmux session with continuous access is what makes this all great.

tmux new -s ai_helper
node your_script.js

Git, GitHub, GitHub Actions and Copilot

I use GitHub for everything. GitHub is the mothership. Version control, actions, and storage are perfect.

GitHub Copilot currently uses GPT-4 as the base model. It’s fantastic to use the commands /explain and /fix to build understanding or catch errors.

I use GitHub Actions for CI/CD.

This is so useful because you can isolate your master branch as the one that is pushed to your VPS. Totally automated for hosting. Use Actions, seriously, a simple git push origin master and you’re all good to go.

For Node adapters, when you use ‘npm run build’ it’s recommended to deploy the compiled build folder (/build), because you’re not slapping all your node modules on the server… You don’t need your entire repository to host your app. Push the build folder, fool.
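
The workflow itself boils down to running a script like this on the VPS over SSH; the paths, app name and branch are placeholders, not my exact setup:

#!/usr/bin/env bash
set -euo pipefail

# pull the latest master, rebuild, and restart the PM2 process
cd /var/www/myapp
git pull origin master
npm ci
npm run build
pm2 restart myapp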

GitLens - Git Supercharged

One of the most well-known extensions for any developer using VSCode is GitLens, an open-source extension with millions of downloads. It allows you to fully develop without having to leave your editor. For instance, use it to interactively publish public/private repositories when you start a new project. Normally, to add the remote connection in your local dev environment you do git init, git add ., git commit -m "first commit", then go to GitHub, create a repo, and add the remote ref to your local dev. With GitLens, just click “publish branch”, select private or public, name it, and you are done.

GitLens publishing a branch in VSCode

It does so much more and will save you a lot of time in the long run.

Vim

The modal editor itself, worth having on the server (I use its bindings in VSCode anyway).

sudo apt install vim

Git

The fundamental version control system for tracking changes, collaborating, and maintaining code repositories.

sudo apt install git

Lazygit

A simple terminal UI for Git, making it easy to manage branches, commits, and logs visually without typing complex Git commands.

Follow the tutorial here.

Zsh (with Oh My Zsh)

Zsh brings advanced shell features like auto-completion, syntax highlighting, and plugins to make terminal use smoother. Bash, the default shell, is simpler but remains widely used for scripting tasks.

Zsh is highly compatible with Bash scripts, so you get the best of both worlds: Zsh for day-to-day use and Bash for scripting.
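
Getting set up is quick. The second line is Oh My Zsh’s official installer; read any script before piping it to a shell:

sudo apt install zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
chsh -s "$(which zsh)"   # make zsh your default shell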

Visual Studio Code (VSCode)

I use VSCode with Vim bindings. Using VSCode’s Remote SSH extension you can edit files on your VPS server seamlessly from your local machine; it’s an excellent addition. Install it here.

Mount Volumes, Block Storage and CDNs

If you’re hosting an application that collects user data, it’s important to manage your storage effectively. Typically, data is stored directly on the VPS, but as the database grows, it could outgrow the disk that also hosts the OS and applications. It’s generally better to isolate your user data from your hosting data and OS disk.

All major VPS providers offer volumes or block storage that you can attach and detach from your VPS. This solution is cost-effective, with around $2 for 100GB, and is perfect for exclusively storing your database.

To ensure your data is well-managed, you can mount a new volume to your VPS. Start by formatting the volume and then mount it to your filesystem. Make sure to configure the system so the volume mounts automatically on reboot. Finally, update your application configuration to use the new storage location and restart the necessary services. This way, your growing database remains well-managed and doesn’t interfere with the operation of your OS and applications.

Here’s an example for scaling PocketBase’s db.

# Connect to your Linode VPS

# List all block devices to identify the new volume
lsblk

# Format the new volume (replace /dev/sdX with your volume identifier)
sudo mkfs.ext4 /dev/sdX

# Create a directory to serve as the mount point
sudo mkdir -p /mnt/blockstorage/pb_data

# Mount the volume to the newly created directory
sudo mount /dev/sdX /mnt/blockstorage

# Set appropriate permissions for the directory
sudo chmod -R 755 /mnt/blockstorage/pb_data

# Ensure the volume mounts on system reboot
echo "/dev/sdX /mnt/blockstorage ext4 defaults,nofail 0 0" | sudo tee -a /etc/fstab

# Modify the Pocketbase service configuration to use
# the new storage directory
sudo nano /etc/systemd/system/pocketbase.service
# Update the ExecStart line to:
# ExecStart=/root/apps/guestbook/pocketbase serve --http="127.0.0.1:8090" --dir="/mnt/blockstorage/pb_data"

# Save and exit the editor (Ctrl+O, Enter, Ctrl+X)

# Reload the systemd daemon
sudo systemctl daemon-reload

# Restart the Pocketbase service
sudo systemctl restart pocketbase

# Restart the Nuxt.js service (assuming you have a service file for Nuxt.js in this case)
sudo systemctl restart nuxtjs

Payments

I use the Stripe API. The Stripe CLI is great, and I recommend that you don’t build your own payment flow…

I tend to use Gumroad as a payment provider too, which is a great experience for selling digital products. I highly recommend them too.

Emails

There are many email API providers. I use Postmarkapp. You can use Nodemailer or PHPMailer. Use whatever works best for you. Postmarkapp’s API simplifies everything, and it’s had a 100% delivery rate thus far.

Optimisation

For PHP projects, I minify JS and CSS files to keep things speedy. I also use gzip to compress files for faster loading.
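
One way to do the compression ahead of time is precompressing build assets so the server can hand out ready-made .gz files. This assumes a dist/ folder and Nginx’s gzip_static on; directive:

# keep the originals (-k), maximum compression (-9)
find dist -type f \( -name '*.js' -o -name '*.css' \) -exec gzip -k -9 {} \;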

Health checks

Create one document that has all your checklists on it: a global document that runs cronjob checks and is fed information by logs. If there’s an error, make sure it sends you an email or a message through a chat API such as Telegram’s.
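
A minimal sketch of such a check, run from cron; the URL, bot token and chat ID are placeholders you’d fill in:

#!/usr/bin/env bash
# curl the site and alert via Telegram's sendMessage API on a non-200 response
TOKEN="your-bot-token"
CHAT_ID="your-chat-id"

STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://yourdomain.com)
if [ "$STATUS" -ne 200 ]; then
  curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
    -d chat_id="${CHAT_ID}" \
    -d text="Site down: HTTP ${STATUS}"
fi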

UptimeRobot helps too: if your server goes down, UptimeRobot will report the 500 status to you instantly.

I’ve had 99.999% server uptime, the downtime is when I update or break something.

DevOps

DevOps is a digital janitor job. It’s very important to keep things clean. You have to maintain things over time for scale, security and reliable hosting.

Command Line Tools

Fundamental tools for network troubleshooting and config:

ping helps verify network connectivity between your machine and another.

arp -a allows you to see the mapping of IP addresses to MAC addresses on your local network, which can help diagnose duplicate IP issues or unauthorized devices.

traceroute shows the path your data takes to reach a destination, helping identify where delays or failures occur in the network.

ifconfig, ipconfig, and ip address display network interface configurations, enabling you to check or modify your IP settings.
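
Typical invocations look like this (example.com stands in for whatever host you’re debugging):

ping -c 4 example.com     # send 4 echo requests, then stop
traceroute example.com    # show the hop-by-hop path
ip addr show              # interfaces and IPs (modern replacement for ifconfig)
arp -a                    # IP-to-MAC mappings on the local network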

If I wanted to get more tracking, I could use Prometheus for collecting system metrics like CPU, memory, and response times. For logs, Loki stores and indexes events efficiently. If I needed to trace requests across services, Tempo would handle distributed tracing.

Grafana could visualize everything in dashboards, making it easier to spot issues. OpenTelemetry would help instrument applications to send metrics, logs, and traces to the right places.

To track Linux server performance at maximum settings, I’d set up Prometheus for resource metrics, Loki for logs, and Grafana for visualization.

The local way to check Nginx:

tail -f /var/log/nginx/access.log
tail -f /var/log/php8.1-fpm.log
journalctl -u nginx --since "10 minutes ago"

The local way to check CPU/RAM/Disk:

htop
df -h
free -m

Networks & Protocols

Start here if you know absolutely nothing: HTTP from Scratch.

I quickly learned that much of my work involves dealing with TCP (Transmission Control Protocol) ports. So here is what you will need to know: SMTP (Simple Mail Transfer Protocol) is used for sending emails and operates on port 25. HTTP (HyperText Transfer Protocol) is the foundation of data communication on the web and uses port 80. HTTPS (HyperText Transfer Protocol Secure) is the secure version of HTTP using SSL/TLS encryption and operates on port 443. FTP (File Transfer Protocol) transfers files between client and server and typically uses port 21 for control commands and port 20 (or other ports) for data transfer. SSH (Secure Shell) provides secure access to remote computers and operates on port 22.
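
To see which TCP ports are actually open and listening on your own box, and which processes hold them:

sudo ss -tlnp   # t = TCP, l = listening, n = numeric ports, p = owning process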

Here is a list of keywords to search and read up on: Port Forwarding, TCP Ports, Switches and ARP (Address Resolution Protocol), Private IP Address Blocks, TCP/IP Address and Subnet Mask, TCP/IP Protocol Suite, Layer 3 Networking, Layer 2 Networking, MAC Address (Media Access Control Address), TCP/IPv6 (Transmission Control Protocol/Internet Protocol Version 6), Protocols, Caching, Routers and Default Gateways, Modems, Static IP Addresses, Firewalls, DHCP (Dynamic Host Configuration Protocol), DNS (Domain Name System), IP Addressing, Subnetting, Ports, SSH (Secure Shell), SSL (Secure Sockets Layer) / TLS (Transport Layer Security), UDP (User Datagram Protocol) / TCP/IP (Transmission Control Protocol/Internet Protocol) Protocols, HTTP (HyperText Transfer Protocol), HTTPS (HyperText Transfer Protocol Secure), FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and NTP (Network Time Protocol).

SSL (Secure Sockets Layer) is the outdated name for TLS (Transport Layer Security). SSL/TLS are essentially the same in the context of securing communications, but TLS is the updated and more secure version.

Here is the list of port numbers, you’ll want to learn what is what.

Btop for process monitoring

This is Task Manager’s performance tab from Windows, but in Linux. The classic tool is called top instead; however, I recommend using btop over top.

btop is even more modern than htop with a graphical and interactive UI. It adds visual representations for CPU, memory, disk and network data.

The btop command in action

sudo snap install btop

Breakdown of UFW Rules (Uncomplicated Firewall)

sudo ufw status verbose

sudo ufw status numbered

  • 80,443/tcp: allows inbound traffic on ports 80 (HTTP) and 443 (HTTPS), the standard ports for web traffic.
  • 22/tcp: allows SSH traffic, enabling remote management of the server. This is crucial for administration, but should be restricted to known IP addresses if possible to reduce the risk of unauthorized access.
  • 3000/tcp: often used for development environments or specific applications. If you’re running an app on this port that needs external access, this is fine. Otherwise, consider restricting it.
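
A baseline rule set matching that breakdown might look like this; the source IP for SSH is a placeholder, and you can drop that restriction if your IP changes often:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 80,443/tcp
sudo ufw allow from 203.0.113.7 to any port 22 proto tcp   # SSH from a known IP only
sudo ufw enable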

Ansible

You will need to learn YAML to write a script for Ansible. But what even is it?

Ansible goes beyond crontab by managing the entire lifecycle of your servers and applications. You can automate everything from database backups to setting up a new machine with firewalls, databases, users and the apps you use.

While managing a single server with rare changes might make Ansible seem like overkill, it’s a perfect tool for automating updates and upgrades consistently and efficiently.

I use Ansible to simplify my server management. Despite the learning curve, the time saved and reduced errors make it well worth the investment.

Below is a basic example that, while missing some (by some I mean lots of) configurations, demonstrates how you can automate everything in one setup file: server_deploy.yaml.

- name: automate server setup and manage config on debian
  hosts: webservers
  become: yes  # gain sudo privileges
  vars:
    nodejs_version: "14.x"  # Specify the Node.js version you want to install
    nginx_config_source: "./templates/nginx.conf.j2"  # Path to your Nginx template
    nginx_config_dest: "/etc/nginx/sites-available/default"  # Destination path on the server

  tasks:

  # update and upgrade the system
  - name: Update APT package index
    apt:
      update_cache: yes

  - name: Upgrade all packages to the latest version
    apt:
      upgrade: dist

  # install packages
  - name: Install essential packages
    apt:
      name:
        - curl
        - git
        - sqlite3
        - build-essential
      state: present

  # install node.js (GPG key first, so apt can verify the repo)
  - name: Add NodeSource GPG key
    apt_key:
      url: "https://deb.nodesource.com/gpgkey/nodesource.gpg.key"
      state: present

  - name: Add NodeSource APT repository for Node.js
    apt_repository:
      repo: "deb https://deb.nodesource.com/node_{{ nodejs_version }} main"
      state: present

  - name: Update APT package index after adding NodeSource repo
    apt:
      update_cache: yes

  - name: Install Node.js
    apt:
      name: nodejs
      state: present

  # install nginx
  - name: Install Nginx web server
    apt:
      name: nginx
      state: present

  # manage nginx setup
  - name: Deploy custom Nginx configuration
    template:
      src: ""
      dest: ""
      owner: root
      group: root
      mode: '0644'
    notify:
      - Reload Nginx

  # nginx enabled and started
  - name: Ensure Nginx is enabled at boot
    systemd:
      name: nginx
      enabled: yes

  - name: Ensure Nginx service is running
    systemd:
      name: nginx
      state: started

  # nginx configuration
  - name: Allow 'Nginx Full' profile through UFW
    ufw:
      rule: allow
      name: 'Nginx Full'
    when: ansible_facts['pkg_mgr'] == 'apt'

  # permissions
  - name: Ensure /var/www has correct permissions
    file:
      path: /var/www
      owner: www-data
      group: www-data
      mode: '0755'
      state: directory

  # handler referenced by the Nginx template task above
  handlers:
  - name: Reload Nginx
    systemd:
      name: nginx
      state: reloaded

Security

Code reviews are a regular part of my routine. It’s a useful way to catch mistakes and improve the overall quality of the code. Auditing your code and server with AI is highly recommended as an added filter.

Implement secure SSH key management on your VPS by generating an SSH key pair on your local machine and adding the public key to the server’s authorized_keys file. Allow only key-based authentication and disable password logins, because brute-forcing is easy. Create a user account and limit its permissions; never use root.
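
A minimal sketch of that hardening; the username and key type are my placeholders, and keep a second session open while testing so you don’t lock yourself out:

# on your local machine: generate a key pair and copy the public key up
ssh-keygen -t ed25519 -C "you@example.com"
ssh-copy-id deploy@your-vps-ip

# on the server: disable password and root logins
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh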

I use UFW (Uncomplicated Firewall). Permit only essential ports to allow incoming connections on port 22 for SSH and ports 80 and 443 for HTTP/HTTPS traffic to your apps, with Nginx as your web server ofc.

I use Fail2Ban to help protect against brute-force attacks by temporarily banning IP addresses that show suspicious activity, such as multiple failed login attempts.

Install Fail2Ban:

sudo apt update && sudo apt install fail2ban

Create a local configuration file:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Edit the local configuration file:

sudo nano /etc/fail2ban/jail.local

This is an example configuration that protects Nginx by monitoring the log at /var/log/nginx/error.log and protects SSH by monitoring the log at /var/log/auth.log.

[nginx-http-auth]
enabled = true
filter = nginx-http-auth
logpath = /var/log/nginx/error.log
maxretry = 5
bantime = 3600
findtime = 600
action = iptables-multiport[name=nginx, port="http,https", protocol=tcp]

[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600

Nginx: locks an IP after 5 failed attempts within 600 seconds (10 minutes). Bans the IP for 3600 seconds (1 hour). Blocks both HTTP (port 80) and HTTPS (port 443) using iptables.

SSH: Blocks an IP after 3 failed SSH login attempts. Bans the IP for 3600 seconds (1 hour).

Start and enable Fail2Ban:

sudo systemctl start fail2ban
sudo systemctl enable fail2ban

Check log files here:

sudo tail -f /var/log/fail2ban.log

Check the status like this:

sudo fail2ban-client status nginx-http-auth

I use the built-in PDO when I work with databases in PHP. It’s reliable and helps keep everything neat and tidy. Every input and output must be sanitized: that means cleaning (and parameterizing) any data going into or out of the database to avoid SQL injection.

If your VPS has a ton of traffic and you’re unsure about security, consider hiring someone specialising in Cyber Security/DevOps for an audit. There are loads of talent who’ve put all their skill points in this skill tree.

I highly recommend using Yubico for hardware-enforced 2FA. Get two YubiKeys (primary and a backup).

Backups

When you get a VPS from providers like Linode or DigitalOcean, they usually offer multiple daily backups, which is super handy. But there’s more you can do to keep your data safe.

You can set up multi-region off-site backups too. Services like AWS S3, Google Cloud Storage, or Backblaze B2 let you store your data in different locations. This adds an extra layer of security and ensures you can recover it even if something goes wrong in one region.
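
A nightly cron job along these lines covers the basics; it assumes an rclone remote named b2 pointing at your bucket, which is my example setup, not a requirement:

#!/usr/bin/env bash
# archive the data directory and ship it off-site with rclone
BACKUP="/tmp/app-backup-$(date +%F).tar.gz"
tar -czf "$BACKUP" /mnt/blockstorage/pb_data
rclone copy "$BACKUP" b2:my-backup-bucket/
rm "$BACKUP"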

Standby machine

You can configure a standby machine to be a mirrored replica of the main server.

# install mdadm for RAID management
sudo apt-get install mdadm

# create RAID 1 for SSDs (assuming /dev/sda and /dev/sdb)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
  /dev/sda /dev/sdb

# format RAID array with ext4 filesystem
sudo mkfs.ext4 /dev/md0

# mount RAID array
sudo mkdir /mnt/raid1
sudo mount /dev/md0 /mnt/raid1

# ensure automatic mounting at boot
echo '/dev/md0 /mnt/raid1 ext4 defaults 0 0' | sudo tee -a /etc/fstab

Syncing data every 5 minutes using rsync:

*/5 * * * * rsync -avz /mnt/raid1/ user@standby_linode:/mnt/raid1/

Auth, OAuth and 2FA

When it comes to auth, I never save passwords or payment details: only emails, Sign in with Google, and the Stripe tokens. Yes, I know not everyone prefers magic links, but they have a great deal of benefits.

I think it’s better to take the heat from the outspoken haters for the sake of the majority of your users, for three core reasons. Uno: humans are very predictable and reuse the same passwords over and over, which isn’t secure. Dos: development efficiency; magic links streamline auth, and there are readily available libraries on GitHub, reducing development effort. Tres: I’d argue magic-link auth is a better user experience overall, because users don’t need to remember complex passwords.

auth is complicated

Despite most people being accustomed to passwords, I believe magic links are worthwhile due to the numerous issues with passwords. While it does feel like swimming against the current, the benefits of security and user experience make it the better choice. Email only or Google Sign in ftw.

Auth with email-only is essentially: on login, add a randomized hash to your database, email the user a link containing the hash, they click it, log the user in with a cookie, track the cookie, and there is your auth.

Thoughts on frameworks

If you don’t have a framework “You’ll end up building your own shitty JS framework, and the last thing the world needs is another JS framework.” - Fireship.io

I have dabbled with most of the big frameworks to test and figure out what I wanted to develop with: Next.js, React.js (with the Router), SvelteKit, Nuxt.js (Vue.js), Laravel (Blade), CodeIgniter, and I have learned that the best framework is the one that actually gets your product launched in a reasonable amount of time.

Here are a few videos on frameworks, languages and over-engineering to describe the mess of the web if you’re brand new.

But what should you use?

There are the tools and there are the goals, let’s not confuse them.

“What do you want to build?” That is the question that matters. The tech doesn’t necessarily matter. I like Nuxt.js, so I use Nuxt.js; that’s really why.

Index.html

It’s all logic, protocols and networking, fundamentally. Maybe you just need to serve an index.html and you’re good to go.

What VPS providers do you recommend?

The top tier VPS cloud realms are Linode, DigitalOcean, Hetzner (great in the EU), and Vultr.

How do you scale on a VPS?

You upgrade the instance to a more powerful one to get more compute allocation where and when you need it.

A VPS meme for good measure

VPS recommendation

Sign up to Linode with my referral and you’ll receive $100, 60-day credit.

Code resources

Learning to code? I made a handy list of links to get you started.