<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Andrew Magill, Web Engineer]]></title><description><![CDATA[Andrew Magill's Web Development project portfolio and professional blog]]></description><link>https://magill.dev</link><generator>RSS for Node</generator><lastBuildDate>Tue, 07 Apr 2026 15:03:15 GMT</lastBuildDate><atom:link href="https://magill.dev/feed/posts.xml" rel="self" type="application/rss+xml"/><pubDate>Tue, 07 Apr 2026 15:03:15 GMT</pubDate><copyright><![CDATA[All rights reserved, 2026 Andrew Magill, Web Engineer]]></copyright><language><![CDATA[en]]></language><item><title><![CDATA[AI Discoverability — Structured Data Gives Rich Context to Clueless Crawlers]]></title><description><![CDATA[
Apparently chatbots are the hot new target audience for everything, and unfortunately they're not impressed with your fancy frontend UI. So, if you want your content to show up in AI overviews, structured data provides the missing context to clueless bots and poorly informed AI workflows. It's a key part of machine discoverability, alongside other techniques like [boosting your Next.js blog's visibility with RSS](https://magill.dev/post/boosting-my-nextjs-blogs-visibility-with-rss).

## Structured Data & Micro-Schemas

For my purposes, I want to respectfully inform our new AI overlords that this page is an article and I'm the author. Schema.org, JSON-LD, and Microdata are the secret handshakes that get your content noticed by the machines. Without this, you're relying on scrapers to interpret your site content the way you envisioned, which might not be the best bet.

## Auditing Your Content for Gaps

Before you go wild throwing JSON blobs everywhere, run a quick audit with tools like Google Rich Results Test or Schema Markup Validator.

Like me, you might find your blog is missing Article, Author, or Breadcrumb schemas. Thankfully, a simple solution is just a few copy-pastes away.
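For reference, here's what one of those missing pieces looks like. A minimal BreadcrumbList in JSON-LD (the names and URLs below are illustrative placeholders, not my actual markup) can be pasted straight into the Schema Markup Validator to see how it parses:

```json
{
	"@context": "https://schema.org",
	"@type": "BreadcrumbList",
	"itemListElement": [
		{
			"@type": "ListItem",
			"position": 1,
			"name": "Blog",
			"item": "https://magill.dev/blog"
		},
		{
			"@type": "ListItem",
			"position": 2,
			"name": "AI Discoverability"
		}
	]
}
```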

## Implementing Structured Data in Your Codebase

For my chosen solution, I didn't bother with fancy utility functions or helpers. I've kept things relatively simple for the [initial breakdown of this website's tech stack](https://magill.dev/post/lets-breakdown-my-website), so I'll just dangerously drop the JSON-LD schema into a script tag right in the page component:

```tsx
// app/post/[slug]/page.tsx

export default function Post({ post }) {
	return (
		<>
			<script
				type='application/ld+json'
				dangerouslySetInnerHTML={{
					__html: JSON.stringify({
						'@context': 'https://schema.org',
						'@type': 'Article',
						headline: post.title,
						author: { '@type': 'Person', name: post.author },
						datePublished: post.publishedAt,
						mainEntityOfPage: `https://magill.dev/post/${post.slug}`,
					}),
				}}
			/>
			{/* Post content */}
		</>
	);
}
```

Repeat this pattern for other content: blog indexes, FAQs, whatever. Just keep the schema close to the content. We can do this "dangerously" because I'm the only author. You'll need better precautions if you have user-generated content.
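On that last caveat: with user-generated content, a common precaution (sketched here as a hypothetical helper, not code from this site) is to escape `<` when serializing, so a user-supplied string like `</script>` can't terminate the inline script tag early:

```javascript
// Hypothetical helper: serialize JSON-LD for safe inlining in a <script> tag.
// Escaping "<" as \u003c (still a valid JSON string escape) prevents
// untrusted input from breaking out of the script block.
function safeJsonLd(data) {
	return JSON.stringify(data).replace(/</g, '\\u003c');
}

const html = safeJsonLd({
	'@type': 'Article',
	headline: 'Untrusted </script> title',
});
console.log(html.includes('<')); // prints false
```

The escaped output still parses back to the original string with `JSON.parse`, so crawlers reading the JSON-LD see the real content.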

## Example: Structured Data for a Product Page

Structured data isn't just for articles and authors—it's useful for e-commerce and product listings too. For example, you could help search engines and AI understand your product details, pricing, and availability by embedding a Product schema:

```tsx
// app/product/[slug]/page.tsx

export default function Product({ product }) {
	return (
		<>
			<script
				type='application/ld+json'
				dangerouslySetInnerHTML={{
					__html: JSON.stringify({
						'@context': 'https://schema.org',
						'@type': 'Product',
						name: product.name,
						image: product.images,
						description: product.description,
						brand: { '@type': 'Brand', name: product.brand },
						offers: {
							'@type': 'Offer',
							priceCurrency: product.currency,
							price: product.price,
							availability: 'https://schema.org/InStock',
							url: `https://example.com/product/${product.slug}`,
						},
					}),
				}}
			/>
			{/* Product content */}
		</>
	);
}
```

This approach helps AI and search engines display rich product snippets, improving the discoverability and reach of your content. Adapt the schema to match the available data and suit your purpose.

## Conclusion

I am not sure there is a way to reliably show up in AI overviews, but if you want bots to crawl your content effectively, you'll need to jump through some micro-schema hoops. So, go audit your site, drop in those schemas wherever you can, and help the bots give you back some ~~traffic~~ credit for your ~~content~~ effort.

---

### Related Links

- [Schema.org for Webmasters](https://schema.org/docs/gs.html)
- [Google Rich Results Test](https://search.google.com/test/rich-results)
- [Schema Markup Validator](https://validator.schema.org/)
- [JSON-LD Playground](https://json-ld.org/playground/)
]]></description><link>https://magill.dev/post/ai-discoverability-structured-data-gives-rich-context-to-clueless-crawlers</link><guid isPermaLink="false">https://magill.dev/post/ai-discoverability-structured-data-gives-rich-context-to-clueless-crawlers</guid><category><![CDATA[SEO]]></category><category><![CDATA[Microdata]]></category><category><![CDATA[AI]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 05:53:50 GMT</pubDate></item><item><title><![CDATA[Automating a Full-Stack, Multi-Environment Deployment Pipeline]]></title><description><![CDATA[
I have a confession to make: I am terrible at remembering all the different deployment checks and publishing chores for each of my projects. It's embarrassing, I know. Now that the air has been cleared, let's find a way to compensate for my shortcomings.

In my latest project, I'll set up a full-stack multi-environment deployment pipeline using [GitHub Actions](https://github.com/features/actions). This has been so incredibly useful, I decided to share some details about why I chose this configuration and the benefits of this approach:

## But Why?

![But Why?](/images/blog/but-why.jpg#right)

For my project, I needed a safe space to test code changes before they went live. Local development has its place, but you know it can be a pain for applications hosted on multiple environments. A staging platform provides a valuable resource for stakeholder reviews, facilitating regular feedback and deeper collaboration. This setup allows me to push changes where they are needed, and automagically perform any steps required for each environment. I shake my head when I think about all the time I wasted doing this manually.

## Laying the Pipeline

I organized my repository into separate branches to accommodate each environment: `main` for production and `develop` for staging. Don't forget, this is a full-stack app, with front and backend hosted on different environments. This pipeline uses two Sync-to-FTP actions with separate credentials to deploy both front and backend to their respective servers. If you've ever mistakenly pushed the wrong files to the wrong server, you understand how helpful this is.

To control each environment independently, we can use environment-specific configurations. My staging environment uses a separate database, different API keys, and its own settings. GitHub Actions repository secrets simplify automating anything that varies between environments, like [feature flags and API endpoints](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).

Conditional job execution allows workflows to run differently depending on the branch. Staging can run a full suite of tests, providing confidence in the stability of the codebase. Production only gets a quick smoke test. To make it easier to inspect and debug code in the browser console, I disabled minification and enabled sourcemaps for the front-end on the staging environment. On production, minification is enabled to optimize performance and debug logging is disabled to prevent accidental leaking of sensitive user data (I've written more about [stripping debug logs at build time here](https://magill.dev/post/strip-debug-logs-at-build-time-with-nextjs)).
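That branch-conditional behavior can be sketched with step-level `if` expressions (the test commands here are placeholders, adapt them to your own scripts):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Full test suite only for staging pushes
      - name: Run full tests
        if: github.ref == 'refs/heads/develop'
        run: npm test

      # Quick smoke test only for production pushes
      - name: Smoke test
        if: github.ref == 'refs/heads/main'
        run: npm run test:smoke
```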

## Access Control

One key benefit of this approach is access control. By granting developers access to specific branches, they automatically gain the ability to trigger deployments to the corresponding environments. Instead of juggling individual logins or shared credentials for each environment's hosting platform, I could manage access at the repository branch level. This not only streamlined onboarding and offboarding but also significantly improved security.

## Workflow Procedure

Workflows are triggered by pushes to the relevant branches. In my workflow, a push to `develop` triggers the staging deployment, and a push to `main` triggers the production deployment.

The front-end build step runs `npm run build`, which executes the build script defined in `package.json`. On staging, we can specify a separate configuration file with the `--config dev.config.js` flag to customize the build more precisely. The back-end build uses a generic `composer install` action, which [could be customized further](https://github.com/ramsey/composer-install).
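For reference, the relevant `package.json` scripts might look something like this (the bundler and script names here are assumptions, adapt them to your tooling):

```json
{
	"scripts": {
		"build": "webpack --config webpack.config.js",
		"build:staging": "webpack --config dev.config.js"
	}
}
```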

Here's a more detailed snippet, tying it all together:

```yaml
name: Deploy Main to LIVE FTP
on:
  push:
    branches:
      - main
jobs:
  FTP-Deploy-Action:
    name: Deploy to LIVE Action
    runs-on: ubuntu-latest
    steps:
    - name: Get latest code
      uses: actions/checkout@v4

    - name: Use Node.js 18
      uses: actions/setup-node@v4
      with:
        node-version: '18'

    - name: Build Front
      run: |
        npm install
        npm run build
      working-directory: ./front/

    - name: Sync Front Files
      uses: SamKirkland/FTP-Deploy-Action@v4.3.5
      with:
        server: ##.###.##.###
        username: frontuser
        password: ${{ secrets.front_ftp_password }}
        protocol: ftps
        local-dir: ./front/
        server-dir: /front/

    - name: Setup PHP
      uses: "shivammathur/setup-php@v2"
      with:
        php-version: "latest"

    - name: Build Backend
      uses: "ramsey/composer-install@v3"
      with:
        working-directory: ./back/

    - name: Sync Back Files
      uses: SamKirkland/FTP-Deploy-Action@v4.3.5
      with:
        server: ##.###.##.###
        username: backuser
        password: ${{ secrets.back_ftp_password }}
        protocol: ftps
        local-dir: ./back/
        server-dir: /back/
```

## To be continued...

This multi-environment deployment pipeline has been working great. The simplified access control and the ability to customize build processes for each environment have made deployments easier and faster, freeing me up for other stuff. Because everything is baked into the pipeline, I don't need to remember all the minutiae and procedures required to safely publish projects that use this approach. There are endless ways this approach could be adapted to other projects, and I'm eager to explore what else these methods can accomplish.

### Related Links

- [Official Documentation](https://docs.github.com/en/actions) from GitHub
- [Using secrets in GitHub Codespaces](https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization/managing-development-environment-secrets-for-your-repository-or-organization) from GitHub
]]></description><link>https://magill.dev/post/automating-a-full-stack-multi-environment-deployment-pipeline</link><guid isPermaLink="false">https://magill.dev/post/automating-a-full-stack-multi-environment-deployment-pipeline</guid><category><![CDATA[Devops]]></category><category><![CDATA[CICD]]></category><category><![CDATA[Automation]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 03:01:04 GMT</pubDate></item><item><title><![CDATA[Boosting My NextJS Blog’s Visibility with RSS]]></title><description><![CDATA[
RSS (Really Simple Syndication) is a useful tool for publishers, bloggers, and creators to boost their content's visibility. By adding RSS to my static NextJS blog, I’m hoping to expand my website's reach and make it easier to share my content with a broader audience. So, let’s explore the benefits of RSS and how I’ve sprinkled it on my blog like confetti at a party!

### Reach and Discoverability

RSS is the standard solution for content syndication on the web, allowing content to be shared across multiple websites and social media channels with minimal effort. I plan to use this capability to [cross-post my content](https://dev.to/help/writing-editing-scheduling#Cross-posting-Content) to the developer community site, [Dev.to](https://dev.to). This should be a good method to gain some exposure and boost my online presence—because let’s face it, shouting into the void isn’t exactly effective.

This is just one method to help prevent my content from getting lost in the digital abyss. Content aggregators can expose my posts to potential readers who probably will not stumble on my blog through other means. This gives my site extra exposure and potential backlinks that could boost SEO credibility. It’s a win-win!

### The Right Tools

I'll use the `rss` npm library to generate an RSS feed into my static NextJS blog. It's a straightforward library that simplifies the process and integrates seamlessly with my project. This way, I can focus on creating content instead of wrestling with XML schemas or other maintenance headaches.

Now, let’s get down to configuring that shiny new RSS feed of ours.

**Generate Feed Content**: To start, we can create a utility function to generate our feed content:

```javascript
import RSS from 'rss';
import { settings } from '@/utils/settings.mjs'; // site settings
import postService from '@/utils/PostService'; // post service utility

const getPostFeed = (posts = []) => {
	// set feed values from site settings
	const feed = new RSS({
		title: settings.title,
		description: settings.description,
		site_url: settings.siteUrl,
		feed_url: `${settings.siteUrl}/feed/posts.xml`,
		language: 'en',
		date: new Date(),
	});

	// Get post data
	posts = posts.length > 0 ? posts : postService.getPosts();

	posts.forEach((post) => {
		// add post data to feed
		feed.item({
			title: post.title,
			guid: `${settings.siteUrl}/post/${post.slug}`,
			url: `${settings.siteUrl}/post/${post.slug}`,
			date: post.created,
			description: post.description,
			author: post.author || settings.author,
			categories: post.categories || [],
		});
	});

	return feed;
};
export { getPostFeed };
```

You can check out my most recent version of that utility function in [this site's repo on GitHub](https://github.com/andymagill/dev.magill.next/blob/master/utils/feed.js). Someday I'll create a unit test for that function, pinky swear, but for now...

**Route the Feed**: Then, we can "feed" our blog data into an API `route.tsx`:

```javascript
import { getPostFeed } from '@/utils/feed.mjs'; // the feed utility function from above

export async function GET() {
	const feed = getPostFeed();

	return new Response(feed.xml(), {
		headers: { 'Content-Type': 'application/rss+xml; charset=utf-8' },
	});
}
```

_Easy peasy lemon squeezy_, as the kids like to say. You can see my [latest implementation](https://github.com/andymagill/dev.magill.next/blob/master/app/feed/%5Btype%5D/route.tsx) of this includes a [dynamic route segment](https://nextjs.org/docs/pages/building-your-application/routing/dynamic-routes) to serve different versions of the post feed.

**Build for Production**: Finally, let's run the build process to kick out the jams:

```bash
pnpm run build
```

New posts and content changes will now auto-magically show up in [the feed.xml](https://magill.dev/feed/posts.xml). Excelsior!

### Next.js Dynamic Routing for RSS Feeds

Let’s see how the RSS feed is actually served in my Next.js app, using a dynamic API route. Here’s a simplified version of my `app/feed/[type]/route.tsx` file:

```typescript
import { NextResponse } from 'next/server';
import postService from '@/utils/PostService';
import { getPostFeed } from '@/utils/feed';

export const generateStaticParams = async (): Promise<{ type: string }[]> => {
	const params = ['posts.xml', 'posts.json'].map((type) => ({ type }));
	return params;
};

export async function GET(
	request: Request,
	{ params }: { params: { type: string } }
) {
	if (params.type === 'posts.xml') {
		// Serve up the RSS feed
		const feed = getPostFeed();
		return new Response(feed.xml(), {
			headers: {
				'Content-Type': 'application/rss+xml; charset=utf-8',
			},
		});
	} else if (params.type === 'posts.json') {
		// Serve up posts as JSON
		const posts = postService.getPosts();
		return new Response(JSON.stringify(posts), {
			headers: {
				'Content-Type': 'application/json; charset=utf-8',
			},
		});
	} else {
		return new Response('Not Found', { status: 404 });
	}
}
```

This route handles requests for both `/feed/posts.xml` (RSS) and `/feed/posts.json` (raw article data). When a request is made, it generates the RSS feed using the utility function and returns it with the correct content type. This approach leverages Next.js’s file-based routing and makes it easy to add or modify feed formats in the future (_I'm looking at you emerging AI protocols_).

## The Closing Tag

So, I’ve finally implemented RSS in my NextJS blog—because who doesn’t want to dive into the exciting world of content syndication, _am I right?_ Using the `rss` library to generate the feed at build time was fairly straightforward. As I publish fresh content, fingers crossed that this setup will help me reach an audience without resorting to begging or spamming.

### Related Links

- [What is an RSS feed?](https://www.digitaltrends.com/computing/what-is-an-rss-feed/) from DigitalTrends
- [How Do RSS Feeds Work?](https://rss.com/blog/how-do-rss-feeds-work/) from RSS.com
- [Why am I still recommending the RSS in 2024?](https://medium.com/@kezhang404/why-am-i-still-recommending-the-rss-in-2024-33e270010829) on Medium
]]></description><link>https://magill.dev/post/boosting-my-nextjs-blogs-visibility-with-rss</link><guid isPermaLink="false">https://magill.dev/post/boosting-my-nextjs-blogs-visibility-with-rss</guid><category><![CDATA[RSS]]></category><category><![CDATA[SEO]]></category><category><![CDATA[NextJS]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 00:41:36 GMT</pubDate></item><item><title><![CDATA[Building a Flexible Modal Component in React]]></title><description><![CDATA[
Modal popups are a very common UI pattern that adds a lot of utility to modern web apps. Unfortunately for developers like us, that means we need to master all the technical complexities associated with them. What seems like a simple popup window actually involves a lot of intricate details: accessibility, responsive design, keyboard navigation, scroll management, and more.

For my current project, I needed something that could be reused throughout the application rather than reinventing the wheel each time. In this post, I'll walk through how we can create a flexible, reusable modal component that can render content, forms, or whatever else I need to show on any device.

## What exactly do we need here?

A modal is best used to focus the user's attention on specific elements without navigating them away from the current page. The solution should be versatile enough to handle various use cases: terms and conditions, newsletter signups, contact forms, or notification alerts.

My wishlist looks something like this:

- **Accessibility**: The modal must be accessible and usable for keyboard and screen reader users
- **Scroll Management**: It should prevent scrolling of the underlying page, but allow scrolling within the modal when necessary
- **Flexibility**: The design must be simple enough to easily use anywhere, and adapt to different contexts and content types

## The Dialog Element, the Modern Standard

![Modal Component Example](/images/blog/modal-example.jpg#right)

Since 2024, the HTML `<dialog>` element has achieved excellent cross-browser support and now handles focus management, keyboard navigation, and accessibility out of the box. **This is the recommended approach for most use cases.** The native dialog provides:

- Built-in backdrop and `::backdrop` pseudo-element for styling
- Automatic focus trapping and restoration
- `Escape` key handling by default
- Better performance and less JavaScript overhead
- Reduced complexity and bundle size

Here's a clean React wrapper around the native `<dialog>` element:

```tsx
import { useRef, useEffect, ReactNode } from 'react';

interface ModalProps {
	isOpen: boolean;
	onClose: () => void;
	title?: string;
	children: ReactNode;
}

const Modal = ({ isOpen, onClose, title, children }: ModalProps) => {
	const dialogRef = useRef<HTMLDialogElement>(null);

	useEffect(() => {
		const dialog = dialogRef.current;
		if (!dialog) return;

		if (isOpen) {
			dialog.showModal();
		} else {
			dialog.close();
		}
	}, [isOpen]);

	return (
		<dialog ref={dialogRef} className='modal' onClose={onClose}>
			<div className='modalHeader'>
				{title && <h2>{title}</h2>}
				<button
					className='closeButton'
					onClick={onClose}
					aria-label='Close modal'
				>
					×
				</button>
			</div>

			<div className='modalBody'>{children}</div>
		</dialog>
	);
};
```

### Styling the Dialog and Backdrop

The native dialog provides a `::backdrop` pseudo-element for styling the background. Here's a complete setup:

```scss
/* Dialog container */
.modal {
	padding: 2rem;
	border: none;
	border-radius: 8px;
	box-shadow: 0 10px 40px rgba(0, 0, 0, 0.3);
	max-width: 90vw;
	max-height: 90vh;
}

/* Background overlay */
.modal::backdrop {
	background-color: rgba(0, 0, 0, 0.5);
}
```

### Accessibility is Built-In

The native `<dialog>` element handles most accessibility concerns automatically:

- Focus management is handled by `showModal()` and `close()`
- Pressing `Escape` closes the dialog by default
- The browser manages focus trapping
- Screen readers recognize it as a dialog

### No Portals Needed

Unlike custom implementations, the native `<dialog>` element doesn't require React portals. Dialogs opened with `showModal()` are promoted to the browser's top layer, which renders above all other page content regardless of `z-index`. You can place your modal component anywhere in your component tree without worrying about stacking contexts.

### Handling Long Content

For modals with extensive content, use a scrollable content area while keeping the header sticky:

```scss
.modal {
	display: flex;
	flex-direction: column;
}

.modalHeader {
	flex-shrink: 0;
	border-bottom: 1px solid #eee;
	padding-bottom: 1rem;
	margin-bottom: 1rem;
}

.modalBody {
	overflow-y: auto;
	flex-grow: 1;
}
```

The dialog automatically constrains itself to viewport size, so content inside scrolls naturally.

## Using the Modal

Here's the component in use. Click a button to open, and the dialog handles everything else:

```tsx
function Thumbnail({ title, description, image }) {
	const [isModalOpen, setIsModalOpen] = useState(false);

	return (
		<>
			<button onClick={() => setIsModalOpen(true)} className='thumbnail'>
				<img src={image} alt={title} />
			</button>

			<Modal
				isOpen={isModalOpen}
				onClose={() => setIsModalOpen(false)}
				title={title}
			>
				<div className='itemModalContent'>
					<img src={image} alt={title} />
					<p>{description}</p>
				</div>
			</Modal>
		</>
	);
}
```

### For Forms and Other Use Cases

The same component works for any content—forms, confirmations, notifications:

```tsx
function ContactSection() {
	const [isModalOpen, setIsModalOpen] = useState(false);

	return (
		<>
			<button onClick={() => setIsModalOpen(true)}>Get in Touch</button>

			<Modal
				isOpen={isModalOpen}
				onClose={() => setIsModalOpen(false)}
				title='Contact Us'
			>
				<ContactForm onSubmit={() => setIsModalOpen(false)} />
			</Modal>
		</>
	);
}
```
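One more native nicety worth knowing, sketched here as plain HTML separate from the component above: a form with `method="dialog"` closes its enclosing dialog on submit, and the submitting button's `value` becomes the dialog's `returnValue`:

```html
<!-- Submitting closes the enclosing <dialog> automatically;
     dialog.returnValue is set to the clicked button's value. -->
<form method="dialog">
  <p>Delete this item?</p>
  <button value="cancel">Cancel</button>
  <button value="confirm">Confirm</button>
</form>
```

This is handy for simple confirmations, since no JavaScript submit handler is needed at all.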

## The Closing Tag

The true value of building flexible modal components comes when you need to add new functionality. Whether you're wrapping the native `<dialog>` element or implementing a custom solution, reusable components ensure consistency, maintain accessibility standards, and let you focus on more important things.

Whether you're showing off your best cat photos, collecting emails nobody wants to give you, or displaying legal text no one wants to read, a well-built modal makes life better for everyone involved, especially future you, who doesn't have to build it again.

### Related Links

- [MDN: The Dialog Element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dialog) - Native HTML dialog documentation
- [React createPortal() Documentation](https://react.dev/reference/react-dom/createPortal) - For custom implementations
- [The A11Y Project - A guide to troublesome UI components](https://www.a11yproject.com/posts/a-guide-to-troublesome-ui-components/#modals) - Accessibility best practices
]]></description><link>https://magill.dev/post/building-a-flexible-modal-component-in-react</link><guid isPermaLink="false">https://magill.dev/post/building-a-flexible-modal-component-in-react</guid><category><![CDATA[React]]></category><category><![CDATA[Components]]></category><category><![CDATA[Accessibility]]></category><category><![CDATA[a11y]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 05:03:21 GMT</pubDate></item><item><title><![CDATA[Crafting a Developer Website For Professional Growth]]></title><description><![CDATA[
A professional website has become a tried-and-true method for developers and engineers to showcase their skills and experience. When [building my website](https://magill.dev/post/lets-breakdown-my-website), I thought it could be a worthy exercise to examine what makes a great professional website. There are endless examples of beautiful portfolios and informative weblogs to offer insight and inspiration, but what do employers and clients actually find interesting?

In a competitive job market, a strong portfolio can be your secret sauce. It's like a digital handshake, introducing you to potential employers and clients before even meeting them. Unlike a traditional resume, a portfolio lets you showcase your skills and experience in a dynamic, visual way. A polished, engaging portfolio can lead to all kinds of professional opportunities: job offers, consulting work, or freelance gigs. It's your chance to control the story of who you are as a developer and why people should want to work with you.

### Show Growth, Not Projects

A good professional website should not just show your best projects in a portfolio format, it's also about sharing your thought process and how you tackle challenges. **[Addy Osmani](https://addyosmani.com/blog/write-learn/)**, an Engineering Manager at Google, talks about how highlighting what you learned can set you apart: _"The practice of writing about learning encourages a growth mindset. It fosters curiosity, critical thinking, and a willingness to engage with challenging subjects."_ He suggests including details about the obstacles you faced, and how you solved them. This way, you're not just showcasing your coding skills; you're also highlighting your critical thinking and adaptability in your published work and previous experiences.

### Storytelling And Your Personal Brand

A professional blog is your chance to tell your story—think of it as your origin story, minus the radioactive spider bite. It's where your projects, creativity, and personality collide. As **[Brad Frost](https://bradfrost.com/blog/post/write-on-your-own-website/)**, creator of [Atomic Design](https://atomicdesign.bradfrost.com/), writes: _"Writing on your own website associates your thoughts and ideas with you as a person."_ A great professional blog shows how you think, how you solve problems, and why you're the kind of developer people want to collaborate with.

### The Self-Closing Tag

Your website should reflect your journey as a developer. As you build and maintain it, focus on incorporating current trends, addressing relevant challenges, and telling your unique story. With a thoughtful approach, your portfolio can become a powerful tool in showcasing not just what you do, but how well you can do it.

These ideas, along with the resources below, helped shape my thoughts on how I would build [my website](https://magill.dev/) and [work portfolio](https://magill.dev/projects).

### Related Resources

- [How to Build a Powerful Web Developer Portfolio](https://arc.dev/talent-blog/web-developer-portfolio/): Learn how to build an impressive professional portfolio
- [How to Build a Personal Brand as a Developer](https://cult.honeypot.io/reads/how-to-build-a-personal-brand-as-developer/): Building a personal brand is a fulfilling and practical tactic in today’s industry.
- [How to Build a Web Developer Portfolio](https://brainstation.io/career-guides/how-to-build-a-web-developer-portfolio): Build a Web Developer portfolio that will help boost your brand and attract new eyeballs to your work.
]]></description><link>https://magill.dev/post/crafting-a-developer-website-for-professional-growth</link><guid isPermaLink="false">https://magill.dev/post/crafting-a-developer-website-for-professional-growth</guid><category><![CDATA[Career]]></category><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 00:35:18 GMT</pubDate></item><item><title><![CDATA[Generate a Web App Manifest with Next.js]]></title><description><![CDATA[
The web app manifest is a simple way to reinforce the branding of your web application. In its most basic form, it’s just a JSON file that provides relevant metadata about your website, allowing browsers to present your app like a native application. This includes details like the app's name, icons, theme colors, and display preferences.

In this article, we’ll walk through how to create and implement a generated web app manifest in your Next.js application. So buckle up, because we're about to make your site look like it means business—without actually having to wear a suit.

## What the Heck is a Manifest?

Think of a web app manifest as your application's dating profile. It's where you showcase all your best features: your name (looking sharp!), your icons (hello, good looks), and how you want to be displayed (modest or full-screen drama?). This little JSON file tells browsers, "Hey, I'm not just another webpage. I have some personality."

## Using Generated Manifests in Next.js

Next.js provides a convenient way to generate a web app manifest dynamically using its metadata API. This approach allows you to customize the manifest based on your application's configuration or environment, and it enables browsers to present the web app much like a native application, with features like installation on the home screen and full-screen display.

### Implementation

Ready to get your hands dirty? Let’s dive into creating a [generated web app manifest](https://nextjs.org/docs/app/api-reference/file-conventions/metadata/manifest#generate-a-manifest-file) in your Next.js project:

1. **Generate the Manifest File**:
   First things first, create a new file named `manifest.ts` or `manifest.js` in your `app` directory. Next.js treats `manifest.js` as a special Route Handler that is cached by default. This is where the magic happens.

   Here’s an example of what that code might look like:

   ```typescript
   import { MetadataRoute } from 'next';

   export default function manifest(): MetadataRoute.Manifest {
   	return {
   		name: 'My Next.js Application',
   		short_name: 'Next.js App',
   		description: 'A super-duper application built with Next.js',
   		start_url: '/',
   		display: 'standalone',
   		background_color: '#ffffff',
   		theme_color: '#000000',
   		icons: [
   			{
   				src: '/favicon.ico',
   				sizes: 'any',
   				type: 'image/x-icon',
   			},
   			{
   				src: '/icon-512x512.png',
   				sizes: '512x512',
   				type: 'image/png',
   			},
   		],
   	};
   }
   ```

   The example code above uses placeholder content, but you can easily add logic to pull in details from elsewhere in your application. You can see my [latest implementation](https://github.com/andymagill/dev.magill.next/blob/master/app/manifest.ts) of this on GitHub, where I use a `settings` object that was already available for other functionality.

   But wait, it gets even better—Next.js will _automagically_ detect your `manifest.ts` or `manifest.js` file and add the appropriate `<link>` tag to your HTML's `<head>`. It’s like magic, but without the rabbits and top hats.

2. **Add Icon Files**:
   Now, let’s place the icon files directly in the `app` directory. Next.js will [detect these files](https://nextjs.org/docs/app/api-reference/file-conventions/metadata/app-icons#image-files-ico-jpg-png) and generate the necessary `<link>` elements in the `<head>` of your application. You can use various file types including `.ico`, `.jpg`, `.jpeg`, `.png`, and `.svg`. Just make sure they’re high quality—nobody likes an ugly icon!

   If you need to create something quick and easy, I recommend the [generator on favicon.io](https://favicon.io/favicon-generator/) to create the actual icon files. If you want something super fancy, you can have Next.js actually [generate the icon images](https://nextjs.org/docs/app/api-reference/file-conventions/metadata/app-icons#generate-icons-using-code-js-ts-tsx) for you. Pretty cool, but out-of-scope for my purposes.

3. **Test Your Changes**:
   Finally, build and run your Next.js application. If you've done your job well, this will be very boring. Take a peek in the developer tools to verify that the manifest and icons are being served correctly. If everything looks good, congratulations! You’ve just leveled up your web app.

## The Closing Tag

I started this task simply because I wanted to update the favicon on [my professional website](https://magill.dev), and I somehow ended up in a rabbit hole of web manifests and PWA functionality. It's all part of the process of [crafting a developer website for professional growth](https://magill.dev/post/crafting-a-developer-website-for-professional-growth). A generated web app manifest lets me reinforce my visual branding and deliver an experience closer to a native app. Next.js's built-in support makes the manifest easy to customize, and its automatic handling of manifest and icon metadata simplifies the whole process, reducing the potential for human error and laying the groundwork for more interesting PWA features (to be continued).

### Related Links

- [Web app manifests](https://developer.mozilla.org/en-US/docs/Web/Manifest) from MDN
- [Generate a Manifest file](https://nextjs.org/docs/app/api-reference/file-conventions/metadata/manifest#generate-a-manifest-file) from Next.js
- [App Icon Metadata](https://nextjs.org/docs/app/api-reference/file-conventions/metadata/app-icons) from Next.js
]]></description><link>https://magill.dev/post/generate-a-web-app-manifest-with-nextjs</link><guid isPermaLink="false">https://magill.dev/post/generate-a-web-app-manifest-with-nextjs</guid><category><![CDATA[Metadata]]></category><category><![CDATA[NextJS]]></category><category><![CDATA[PWA]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 01:00:10 GMT</pubDate></item><item><title><![CDATA[Happy Birthday to My Website]]></title><description><![CDATA[
Happy Birthday, [my dear Website](https://magill.dev/)! Today is a big day for you as we celebrate your re-launch, and I can’t help but feel proud of how far we’ve come together. Today marks a major milestone, for both of us. You're officially out of your awkward WordPress phase and stepping into your prime. I'm so excited to see what we can accomplish together.

I know, I know... it's been a difficult road, and we've had our ups and downs. You didn't have much to say back then, and your look was all over the place. But hey, we've all been there, right? I've let go of the past. But through all that, I learned a ton about what it takes to turn an average website into a great one. I've set you up for success, and I'm excited to see what the future holds.

We have significant challenges ahead, but I think we are ready to deal with whatever comes our way. I’m committed to spending more quality time with you, writing [meaningful blog posts](https://magill.dev/blog) and adding new features that are genuinely useful (unlike [this post](https://magill.dev/post/happy-birthday-to-my-website)). It feels a bit like gardening, and I hope this will grow. So here’s to you Website, old pal! May this birthday kick off a new adventure filled with opportunities. I can’t wait to see where this crazy journey takes us!

To anyone who took the time to read this nonsense, thank you. I'll do my best to earn the time you spend here.
]]></description><link>https://magill.dev/post/happy-birthday-to-my-website</link><guid isPermaLink="false">https://magill.dev/post/happy-birthday-to-my-website</guid><category><![CDATA[Milestone]]></category><category><![CDATA[Portfolio]]></category><category><![CDATA[Update]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 00:28:52 GMT</pubDate></item><item><title><![CDATA[Creating a JavaScript Debugging Utility to Guard Noisy Production Consoles]]></title><description><![CDATA[
The first step to recovery is admitting you have a problem. It starts with one `console.log()` and the next thing you know, the console looks like an index of real-world customer data. I don't want to feel like I work in a digital junkyard, so I built a reusable JavaScript logger utility that's ready for any environment, and knows when to shut up.

## The Solution: A Production-Guarded Logger

Creating a function that wraps `console.log()` gives us a single point of control for all our future logging needs. The most complex part of this function is a simple environment check. A lot of build tools like Webpack or Vite can inject a `process.env.NODE_ENV` variable that can be either _'development'_ or _'production'_. We'll use that to control logging behavior.

```javascript
class Logger {
	log(message, ...optionalParams) {
		if (process.env.NODE_ENV !== 'production') {
			const timestamp = new Date().toISOString();
			console.log(`[${timestamp}]`, message, ...optionalParams);
		}
	}
}
```

Now our new logging buddy will automatically go quiet in a live environment!
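
For instance, here's a quick usage sketch (the class is re-declared so the snippet stands alone; the message text is just an example):

```javascript
// Hypothetical usage of the Logger class above (re-declared here so the
// snippet is self-contained)
class Logger {
	log(message, ...optionalParams) {
		if (process.env.NODE_ENV !== 'production') {
			const timestamp = new Date().toISOString();
			console.log(`[${timestamp}]`, message, ...optionalParams);
		}
	}
}

const logger = new Logger();
logger.log('Fetching posts...', { count: 3 }); // prints in development, stays silent in production
```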

## Adding Bells and Whistles

If we want a truly useful logger, we should match the different levels of severity that we have with the native `console` object. There's **info**, **warn**, **error**, and **debug**, each with a specific purpose. We can return an object with a method for each level that dispatches the corresponding console call:

```javascript
const logger = (() => {
	// Check the environment once
	const isProd = process.env.NODE_ENV === 'production';

	// Return an object with methods for each log level
	return {
		debug: (message, ...optionalParams) => {
			if (!isProd) {
				console.debug('DEBUG:', message, ...optionalParams);
			}
		},
		info: (message, ...optionalParams) => {
			console.info('INFO:', message, ...optionalParams);
		},
		warn: (message, ...optionalParams) => {
			console.warn('WARN:', message, ...optionalParams);
		},
		error: (message, ...optionalParams) => {
			console.error('ERROR:', message, ...optionalParams);
		},
	};
})();
```

Notice I'm now using arrow functions to return the object methods, inside an immediately invoked function expression (IIFE) so lazy me doesn't even need to initialize it. Also, I only production-guarded _debug_ messages, while _info_, _warn_, and _error_ remain active since they are typically useful for monitoring. Now I've got a rock-solid, reusable logger I can use throughout my application.

```javascript
// Example usage:
logger.info('Flooding Torpedo Tubes...');
logger.debug('A secret value:', mySecretVariable);
logger.warn("Don't touch that!");
logger.error('Goodbye World!');
```

## The Closing Tag

By spending a little time creating a centralized logger, I've made my codebase cleaner and my debugging life a lot easier. Now I have a single place to control logging and maintain debugging consistency throughout my application code. It's a simple change that helps improve the overall health of my projects.

## References

- **Console API** — MDN Web Docs  
  https://developer.mozilla.org/en-US/docs/Web/API/Console

- **process.env (Node.js)** — Node.js Documentation  
  https://nodejs.org/api/process.html#processenv

- **Logging best practices** — The Twelve-Factor App (logs as event streams)  
  https://12factor.net/logs
]]></description><link>https://magill.dev/post/javascript-debugging-utility-to-guard-noisy-production-consoles</link><guid isPermaLink="false">https://magill.dev/post/javascript-debugging-utility-to-guard-noisy-production-consoles</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Debugging]]></category><category><![CDATA[Observability]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 08:02:08 GMT</pubDate></item><item><title><![CDATA[Let's Breakdown This Website's Tech Stack]]></title><description><![CDATA[
When planning the implementation of [my new website](https://magill.dev/), I wanted something that could showcase my work experience in an impressive way. To accomplish this effectively, I selected a robust foundation of technologies designed to provide a platform that is fast (to build), easy (to publish) AND cheap (to host). Executing that plan produced [Magill.Dev](https://magill.dev) – a blend of modern frameworks, thoughtful design choices, and solutions to real-world development challenges. Let's take a deep dive into the tech stack:

## Stacking the Foundation

At the heart of this website lies a curated tech stack based on React and Next.js. React's component-based architecture is great for building a dynamic and interactive user interface. Next.js provided the backend and dev environment to build the site statically, which helps satisfy two of our initial requirements: fast to build and cheap to host.

For styling, I opted for a combination of SASS and CSS Modules. SASS allowed me to write more maintainable stylesheets, while CSS Modules ensured that styles remained neatly scoped to specific components. To manage content, I chose [Markdown](https://magill.dev/simplified-content-management-with-markdown) for its simplicity and readability. This decision allowed me to focus on writing without getting bogged down in complex formatting.

## Summoning the Beast

Here are some of the challenges and problems I faced when building this website, and how I tackled them. From [handling SEO for AI crawlers](https://magill.dev/post/ai-discoverability-structured-data-gives-rich-context-to-clueless-crawlers) to [persisting animation states](https://magill.dev/post/persisting-animation-state-across-page-views-in-Reactjs), if you're diving into similar Next.js projects, you might find this useful:

### Dynamic Routes vs. Static Generation:

These two pillars of modern web development don't always play nice together. However, with the help of Next.js's generateStaticParams function, I managed to pre-render all my blog post pages at build time, making the website lightning fast.

```typescript
export const generateStaticParams = async () => {
	const posts = getSlugs(); // local helper that collects post slugs from the content directory
	return posts.map((post) => ({ slug: post.slug }));
};
```

**More Details:**  
[Dynamic Routing](https://nextjs.org/docs/app/building-your-application/routing/dynamic-routes) in NextJS
[Deploying Static Exports](https://nextjs.org/docs/app/building-your-application/deploying/static-exports) in NextJS

### Async File Operations:

Synchronous file operations are too slow for a lot of scenarios, including Next.js static exports. Thankfully, switching to asynchronous operations with fs.promises saved the day.

```typescript
import { promises as fs } from 'fs';

async function getPostContent(slug: string) {
	const content = await fs.readFile(`content/blog/${slug}.md`, 'utf8');
	// Process content...
}
```

**More Details:**  
[How to Load Data from a File in Next.js](https://vercel.com/guides/loading-static-file-nextjs-api-route)

### TypeScript, My Frenemy:

This love-hate relationship involved wrestling with type mismatches. Being more explicit with my types (like the Post or Project interfaces) helped tame this beast:

```typescript
interface Post {
	title: string;
	description: string;
	content: string;
	image: string;
	tags: string[];
	created: string;
}
```

**More Details:**  
[TypeScript in 5 Minutes](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html)

### Markdown Magic & Front Matter Hero:

Rendering Markdown content as HTML involved the awesome [markdown-to-jsx library](https://www.npmjs.com/package/markdown-to-jsx). Not only does this allow me to convert Markdown to HTML auto-magically, it also allows me to insert React components and other JSX code into my blog articles. Extracting metadata from those Markdown files was made possible by the unsung hero, [gray-matter](https://www.npmjs.com/package/gray-matter).

```jsx
import matter from 'gray-matter';
import Markdown from 'markdown-to-jsx';
...
const { data, content } = matter(fileContents);
return <Markdown>{content}</Markdown>;
```

**More Details:**  
[Front-Matter Documentation](https://frontmatter.codes/docs)  
[Getting Started with Markdown](https://www.markdownguide.org/getting-started/)

## Monumental Baby Steps

As I continue to develop this website, I have a handful of exciting ideas to improve it further. My highest priority is to regularly publish [interesting and informative blog content](https://magill.dev/blog). I also hope to expand the [project section](https://magill.dev/projects) of the site, showcasing more of my work in greater detail.

On the technical side, I'm looking to [enhance blogging functionality](https://github.com/andymagill/dev.magill.next/blob/master/ROADMAP.md) and UI. This might include features like improved search capabilities and tag-based filtering. For the project content, I'm exploring ways to create more engaging and dynamic presentations of my work.

Building this website has been an interesting journey. The power and flexibility of modern web development tools has enabled me to carefully tailor the development and editing experience. I'm curious where this site will take me, as I continue building it over the next few years.

### Related Links

- [Deploying Static Exports](https://nextjs.org/docs/app/building-your-application/deploying/static-exports#configuration) from NextJS.org
- [TypeScript in 5 Minutes](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) from TypeScriptLang.org
- [Introduction to Markdown](https://www.writethedocs.org/guide/writing/markdown/) from WriteTheDocs.Org
]]></description><link>https://magill.dev/post/lets-breakdown-my-website</link><guid isPermaLink="false">https://magill.dev/post/lets-breakdown-my-website</guid><category><![CDATA[React]]></category><category><![CDATA[NextJS]]></category><category><![CDATA[JAMstack]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 00:31:54 GMT</pubDate></item><item><title><![CDATA[Make Your Website Talk with The JavaScript Web Speech API]]></title><description><![CDATA[
If you have never heard your website speak, you are in for a real treat! I've spent enough time building and writing for my own site that I have decided to make it easy for people to listen to. Enter the Web Speech API, the best browser feature that most users never asked for.

## Why bother with a "listen" button?

I added this feature to [my blog](https://magill.dev) for a couple reasons. First, I selfishly wanted to experiment and learn about the Web Speech API. Secondly, accessibility: not everyone reads the same way, and some folks rely on screen readers or just prefer listening.

## The code

Here's a reusable function that only runs when the API is supported:

```javascript
function setupSpeechButton(contentId, buttonId) {
	// Get associated elements by their element IDs
	const button = document.getElementById(buttonId);
	const content = document.getElementById(contentId);

	// Escape this function if Web Speech API is not supported, or associated elements are missing
	if (!window.speechSynthesis || !button || !content) return;

	// Get the voice from document language
	function getPreferredVoice() {
		const htmlLang = document.documentElement.lang || 'en';
		const voices = window.speechSynthesis.getVoices();
		return voices.find((v) => v.lang.startsWith(htmlLang)) || voices[0];
	}

	function speakContent() {
		window.speechSynthesis.cancel();

		const utterance = new SpeechSynthesisUtterance(content.innerText);

		// Specify the voice based on language
		const voice = getPreferredVoice();
		if (voice) utterance.voice = voice;

		// Update the button while speaking (it stays disabled until playback ends)
		utterance.onstart = () => {
			button.disabled = true;
			button.textContent = 'Speaking…';
		};

		utterance.onend = () => {
			button.disabled = false;
			button.textContent = 'Listen';
		};

		// Speak the content
		window.speechSynthesis.speak(utterance);
	}

	// For browsers that load voices asynchronously
	if (window.speechSynthesis.getVoices().length === 0) {
		window.speechSynthesis.onvoiceschanged = () => {
			// Reset the handler so repeated voiceschanged events don't attach duplicate listeners
			window.speechSynthesis.onvoiceschanged = null;
			button.addEventListener('click', speakContent);
		};
	} else {
		button.addEventListener('click', speakContent);
	}
}

setupSpeechButton('blog-content', 'listen-btn');
```

_ABRACADABRA!_ If the user's browser supports speech synthesis, the "listen" button comes to life. If not, nothing happens. To see the latest version of the React implementation that I used on my own site, check out this [React component](https://github.com/andymagill/dev.magill.next/blob/master/app/components/blog/ListenButton.tsx).

## Closing tag

Adding a "listen" button with the Web Speech API is a simple way to make my blog more inclusive and engaging. It helps make my content more flexible for everyone, not just the visually impaired.

Since chatbots invaded the internet over the past year, voice synthesis and transcription are about to become much more commonplace. The Web Speech API is a small piece of the web, but it's a foundational one that will lead to better user experiences, and not just for AI-enhanced web apps.

---

## Related Links

- [Web Speech API - MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API) : Comprehensive documentation and browser support information

- [SpeechSynthesis Interface](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis) : Detailed API reference for the speech synthesis functionality

- [Accessible Rich Internet Applications (ARIA)](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA) : Best practices for accessible web applications
]]></description><link>https://magill.dev/post/make-your-website-talk-with-the-javascript-web-speech-api</link><guid isPermaLink="false">https://magill.dev/post/make-your-website-talk-with-the-javascript-web-speech-api</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[API]]></category><category><![CDATA[a11y]]></category><category><![CDATA[UI]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 06:32:59 GMT</pubDate></item><item><title><![CDATA[Multi-lingual Routing via Proxy Layer in Next.js 16]]></title><description><![CDATA[
One of [my recent projects](https://markdownmixer.com) had an interesting mix of requirements: API-driven user authentication, SEO-friendly URLs, and multi-lingual translation. I used the Proxy Layer pattern in Next.js 16 as a central orchestrator for all app traffic to handle these overlapping concerns cleanly.

## **Three-way Route Classification**

Applying i18n logic globally can create duplicate URLs and performance delays. Instead, I classified every request into three distinct areas:

### **SEO-First Routes (The Translated Tier)**

For the homepage, /about, and /blog, the **URL is the absolute source of truth**. next-intl enforces locale URL prefixing (e.g., /fr/blog). Even if a user has a cookie suggesting one language, if they land on a French URL, the proxy respects it to ensure link integrity and [SEO consistency](https://magill.dev/post/ai-discoverability-structured-data-gives-rich-context-to-clueless-crawlers).

### **Application Routes (The Clean-URL Tier)**

For authenticated routes like /editor, /library, and /settings, the **source of truth is the cookie/session**. These URLs remain "clean" without locale prefixes. The proxy detects locale from headers or cookies, allowing the UI to localize without URL changes.

### **System Routes (The Auth Bypass)**

The auth callback route is sensitive—the browser must parse URL hash tokens without server-side interference to establish the session. A redirect could strip the authentication token.

I used Next.js middleware matcher configuration to exclude system routes entirely, eliminating unnecessary middleware execution for static assets, API routes, and auth callbacks:

```typescript
// proxy.ts
export const config = {
	matcher: ['/((?!api|_next|_vercel|auth/callback|images|icons|.*\\..*).*)'],
};
```
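
To double-check which paths actually bypass the proxy, the matcher can be approximated with a plain `RegExp` (this translation of the pattern is mine, not an official Next.js API):

```javascript
// Approximate translation of the Next.js matcher into a plain RegExp so
// the exclusions can be sanity-checked outside the framework
const runsProxy = (path) =>
	/^\/(?!api|_next|_vercel|auth\/callback|images|icons|.*\..*).*$/.test(path);

console.log(runsProxy('/blog/post')); // true: handled by the proxy
console.log(runsProxy('/api/posts')); // false: excluded
console.log(runsProxy('/auth/callback')); // false: auth bypass
console.log(runsProxy('/logo.png')); // false: has a file extension
```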

## **Locale Selection: Headers vs. Cookies**

The strategy for determining which locale to display differs by route type:

### **The Homepage Exception**

For users landing on the root `/`, the proxy detects locale from the browser's `Accept-Language` header—not cookies. This prevents "sticky" language redirects from a previous visit:

```typescript
// proxy.ts - Root homepage handler
async function handleRootHomepage(request: NextRequest, startTime: number) {
	const response = NextResponse.next();

	// Ignore cookies; use browser language detection only
	const preferredLocale = getBrowserOnlyLocale(request);
	response.headers.set('x-locale', preferredLocale);

	return handleAuth(request, response, {
		startTime,
		supabaseUrl: process.env.NEXT_PUBLIC_SUPABASE_URL,
		supabaseAnonKey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY,
	});
}

export function getBrowserOnlyLocale(request: NextRequest): Locale {
	const acceptLanguage = request.headers.get('accept-language');

	if (acceptLanguage) {
		const browserLanguages = acceptLanguage
			.split(',')
			.map((lang) => lang.split(';')[0].split('-')[0].trim())
			.filter((lang) => locales.includes(lang as Locale));

		return browserLanguages[0] || 'en';
	}

	return 'en';
}
```
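
The same parsing logic can be exercised outside of Next.js. This standalone sketch mirrors the header handling above; the `locales` list here is a made-up example:

```javascript
// Standalone version of the Accept-Language parsing (swap in your app's
// actual supported locales)
const locales = ['en', 'fr', 'de'];

function parseAcceptLanguage(header) {
	if (!header) return 'en';
	const matches = header
		.split(',')
		.map((lang) => lang.split(';')[0].split('-')[0].trim())
		.filter((lang) => locales.includes(lang));
	return matches[0] || 'en';
}

console.log(parseAcceptLanguage('fr-CA,fr;q=0.9,en;q=0.8')); // 'fr'
console.log(parseAcceptLanguage('es-MX,es;q=0.9')); // 'en': unsupported, falls back
```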

## **Centralized Route Pattern Management**

Centralize all route patterns in one configuration object to make logic declarative and eliminate magic strings:

```typescript
// app/lib/proxy/route-utils.ts
export const ROUTE_PATTERNS = {
	ROOT: '/',
	LOCALIZED: /^\/[a-z]{2}(\/.*)?$/,
	ABOUT: /^\/about(\/.*)?$/,
	BLOG: /^\/blog(\/.*)?$/,
	EDITOR: /^\/editor/,
	LIBRARY: /^\/library/,
	SETTINGS: /^\/settings/,
};

export const ROUTE_CATEGORIES = {
	INTERNATIONALIZED: [
		ROUTE_PATTERNS.ROOT,
		ROUTE_PATTERNS.LOCALIZED,
		ROUTE_PATTERNS.ABOUT,
		ROUTE_PATTERNS.BLOG,
	],
	AUTH_TRACKED: [
		ROUTE_PATTERNS.EDITOR,
		ROUTE_PATTERNS.LIBRARY,
		ROUTE_PATTERNS.SETTINGS,
	],
};
```

## **Isolated Handler Modules**

The proxy delegates to specialized handlers rather than implementing everything inline. Each concern lives in its own module:

```typescript
// proxy.ts - Orchestration only
import { NextRequest } from 'next/server';
import { handleI18n, applyI18nMiddleware } from './app/lib/proxy/i18n';
import { handleAuth } from './app/lib/proxy/auth';
import { classifyRoute } from './app/lib/proxy/route-utils';
import {
	getPreferredLocale,
	getBrowserOnlyLocale,
} from './app/lib/proxy/locale';

export async function proxy(request: NextRequest) {
	const { pathname } = request.nextUrl;
	const route = classifyRoute(pathname);

	if (route.isRoot) {
		return handleRootHomepage(request);
	} else if (route.shouldUseI18n) {
		return handleInternationalizedRoute(request);
	} else {
		return handleNonInternationalizedRoute(request);
	}
}
```

### **Tiered Preference Cascade**

For non-root routes, use a four-tier fallback system:

1. **Explicit Cookie** – User-selected via UI
2. **User Profile** – Auth-linked preference
3. **Browser Header** – Accept-Language
4. **Default** – English

```typescript
// app/lib/proxy/locale.ts
export function getPreferredLocale(request: NextRequest): Locale {
	// 1. Explicit locale cookie (user-selected via UI)
	const cookieLocale = request.cookies.get('NEXT_LOCALE')?.value;
	if (cookieLocale && locales.includes(cookieLocale as Locale)) {
		return cookieLocale as Locale;
	}

	// 2. User profile preference (authenticated users)
	const userPreference = request.cookies.get('USER_LANGUAGE_PREFERENCE')?.value;
	if (
		userPreference &&
		userPreference !== 'auto' &&
		locales.includes(userPreference as Locale)
	) {
		return userPreference as Locale;
	}

	// 3. Browser Accept-Language header
	const browserLocale = getBrowserOnlyLocale(request);
	if (browserLocale !== 'en') {
		return browserLocale;
	}

	// 4. Default fallback
	return 'en';
}
```

The cookie is only updated when the detected preference differs from the current value, minimizing header writes:

```typescript
// app/lib/proxy/i18n.ts
export function setLocaleCookieForResponse(
	request: NextRequest,
	response: NextResponse
): NextResponse {
	const preferredLocale = getPreferredLocale(request);
	const currentCookie = request.cookies.get('NEXT_LOCALE')?.value;

	// Only set if changed (performance optimization)
	if (currentCookie !== preferredLocale) {
		response.cookies.set('NEXT_LOCALE', preferredLocale, {
			httpOnly: false,
			secure: process.env.NODE_ENV === 'production',
			sameSite: 'lax',
			maxAge: 60 * 60 * 24 * 365,
		});
	}

	return response;
}
```

## **Route Classification Logic**

The proxy classifies routes before processing them to determine which handling pattern to apply:

```typescript
// app/lib/proxy/route-utils.ts
export function classifyRoute(pathname: string): RouteClassification {
	const isRoot = pathname === '/';
	const isLocalized = /^\/[a-z]{2}(\/.*)?$/.test(pathname);

	const shouldUseI18n =
		isRoot ||
		isLocalized ||
		/^\/about(\/.*)?$/.test(pathname) ||
		/^\/blog(\/.*)?$/.test(pathname);

	const shouldCheckAuth = matchesAnyPattern(
		pathname,
		ROUTE_CATEGORIES.AUTH_TRACKED
	);

	return { isRoot, isLocalized, shouldUseI18n, shouldCheckAuth };
}
```
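
`matchesAnyPattern` isn't shown in the article; a minimal version (my assumption of its shape, not the actual source) might look like this:

```javascript
// Hypothetical implementation of the matchesAnyPattern helper referenced
// above: true if the pathname matches any string or RegExp in the list
function matchesAnyPattern(pathname, patterns) {
	return patterns.some((pattern) =>
		typeof pattern === 'string' ? pathname === pattern : pattern.test(pathname)
	);
}

console.log(matchesAnyPattern('/editor/draft-1', [/^\/editor/, /^\/library/])); // true
console.log(matchesAnyPattern('/blog', [/^\/editor/, /^\/library/])); // false
```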

This classification step decides the handling pattern up front, before any expensive processing happens.

## **Conclusion**

By classifying routes before processing, the proxy becomes a high-speed filter rather than a bottleneck. Each request reaches the appropriate handler with minimal overhead, ensuring auth-tokens remain intact, crawlers get the right content, and users see their preferred language without broken URLs.

## Related Links

- [Next.js Routing and Dynamic Routes Documentation](https://nextjs.org/docs/app/building-your-application/routing): Official Next.js documentation on routing strategies and dynamic route handling.
- [next-intl Library](https://next-intl-docs.vercel.app/): Comprehensive guide to implementing internationalization (i18n) in Next.js applications.
- [Supabase Auth Documentation](https://supabase.com/docs/guides/auth): Learn about API-driven authentication flows and managing auth callbacks in your application.
]]></description><link>https://magill.dev/post/multi-lingual-routing-via-proxy-layer-in-nextjs</link><guid isPermaLink="false">https://magill.dev/post/multi-lingual-routing-via-proxy-layer-in-nextjs</guid><category><![CDATA[internationalization]]></category><category><![CDATA[middleware]]></category><category><![CDATA[multilingual routing]]></category><category><![CDATA[i18n]]></category><category><![CDATA[Next.js]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 11:12:07 GMT</pubDate></item><item><title><![CDATA[Packaging Helpers with Types & Tests for a Dependable TypeScript Toolbox]]></title><description><![CDATA[
Small utility functions — helpers — become force multipliers when they can be reliably shared and maintained across projects. These tiny workhorses streamline development, improve maintainability, and make developer lives easier. This workflow packages helpers with TypeScript types, focused unit tests, a clean export surface, and a lean build, so fixes land once and propagate everywhere. The goal is a compact, repeatable toolbox that stays predictable as it grows.

## Best Practices for Packaging Helpers

- Single responsibility. Each helper does one thing and composes cleanly. This minimizes breakage risk and keeps surface area obvious when reusing across apps.
- Predictable inputs/outputs. Validate inputs, normalize types (e.g., Date | string), and either return a sensible default or throw with a clear message. This helps prevent hidden consumer bugs.
- Types and tests. Provide TypeScript declarations and a small test per helper. This catches misuse at compile time and protects against regressions when refactoring.
- Concise docs. Keep README short: install, API, a few usage snippets, and brief upgrade notes for breaking changes. The target is low maintenance, not exhaustive documentation chores.
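
As a concrete illustration of the "predictable inputs/outputs" point, here's one way a `formatUtc` helper (the example name used later in this article; the exact behavior is my assumption) could normalize `Date | string` input and fail loudly on garbage:

```javascript
// Sketch of a helper that normalizes Date | string input and throws with
// a clear message on invalid input (behavior is illustrative)
function formatUtc(input) {
	const date = input instanceof Date ? input : new Date(input);
	if (Number.isNaN(date.getTime())) {
		throw new Error(`formatUtc: invalid date input: ${input}`);
	}
	return date.toISOString().slice(0, 10); // UTC date as 'YYYY-MM-DD'
}

console.log(formatUtc('2026-04-07T15:03:15Z')); // '2026-04-07'
console.log(formatUtc(new Date(Date.UTC(2020, 0, 2)))); // '2020-01-02'
```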

## Practical use-cases for Helpers

Imagine you want to maintain a small set of functions across several frontends: a date formatting helper, a tiny analytics-event normalizer, or even a [specialized debugging utility to guard production consoles](https://magill.dev/post/javascript-debugging-utility-to-guard-noisy-production-consoles). Duplicating logic across repos introduces drift and bugs. Packaging those helpers lets you fix issues once and consume consistent behavior everywhere.

Re-export the helpers from a single entry point to provide a stable API surface. Example (`index.js`):

```javascript
export { analyticsEvent } from './events/analyticsEvent';
export { formatUtc } from './date/formatUtc';
```

Publish your utilities as a package (for example, `@your-scope/utils`) and consumers can install what they need without pulling in the whole toolbox:

### Install with npm

```bash
npm install @your-scope/utils
```

Then you can import your helpers from the package wherever they're needed:

```javascript
// component/someComponent.js or lib/library.js
import { analyticsEvent, formatUtc } from '@your-scope/utils';
```

### Toolbox layout

Favor domain-based folders over a flat “utils” dump. This scales without creating a grab-bag.

```text
src/
  date/
    formatUtc.ts
  events/
    analyticsEvent.ts
  string/
    slugify.ts
  index.ts
```

## Package example

Here is how to structure the `package.json` to specify details about filenames, types, building, and testing:

```json
{
	"name": "@your-scope/helpers",
	"version": "0.1.0",
	"type": "module",
	"main": "dist/cjs/index.js",
	"module": "dist/esm/index.js",
	"files": ["dist", "README.md"],
	"types": "dist/index.d.ts",
	"scripts": {
		"build": "rollup -c",
		"test": "node ./test/run.js"
	}
}
```

### TypeScript-first approach

Prefer authoring in TypeScript and emitting declarations automatically.

tsconfig.json (library-focused essentials):

```json
{
	"compilerOptions": {
		"target": "ES2020",
		"module": "ESNext",
		"declaration": true,
		"declarationMap": true,
		"outDir": "dist/esm",
		"moduleResolution": "Bundler",
		"strict": true,
		"skipLibCheck": true,
		"emitDeclarationOnly": true,
		"stripInternal": true
	},
	"include": ["src"]
}
```

## Closing

Packaging helpers can make small pieces of code more maintainable, reliable, and discoverable across projects. The next time you find repetitive code or a domain-specific need, pause and ask, "Is it better to split out this functionality into a reusable helper?" Knowing when to use your own utilities improves your workflow and code quality.

---

### Related Links

- https://nodejs.dev/learn/publishing-nodejs-packages
- https://www.typescriptlang.org/docs/handbook/declaration-files/publishing.html
- https://rollupjs.org/guide/en/
]]></description><link>https://magill.dev/post/packaging-helpers-with-types-and-tests-for-a-dependable-typescript-toolbox</link><guid isPermaLink="false">https://magill.dev/post/packaging-helpers-with-types-and-tests-for-a-dependable-typescript-toolbox</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Utilities]]></category><category><![CDATA[Best Practices]]></category><category><![CDATA[Frontend]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 10:11:42 GMT</pubDate></item><item><title><![CDATA[Persisting Animation State Across Page-Views In React.js]]></title><description><![CDATA[
I see the hero animation on my website often enough that the tiny imperfections started to drive me crazy. Given that [my site is built](https://magill.dev/post/lets-breakdown-my-website) with Next.js SSG (Static Site Generation), the animation would reset to its "Day 1" state when a user navigates to a new page 😭. In contrast to the smooth animations and persistent state of single page apps, my hero animation felt choppy and repetitive.

So what does a better way look like? For me it was a mix of local storage, seeded randomization, and CSS variables. Here’s how I pulled it all together.

## Tracking Styles with Local Storage

To keep things consistent, I needed to track randomized values and current appearance with local storage. Instead of saving every frame, I just store the elapsed duration and some initial presets (colors, positions, etc.) generated by a seededRandom helper.
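JavaScript has no built-in seeded PRNG, so a tiny helper like mulberry32 does the job. Here's a sketch of the persistence idea; the storage key and state shape are simplified stand-ins for what my component actually stores:

```javascript
// Mulberry32: a tiny deterministic PRNG — same seed, same sequence.
function seededRandom(seed) {
	let a = seed >>> 0;
	return function () {
		a = (a + 0x6d2b79f5) | 0;
		let t = Math.imul(a ^ (a >>> 15), a | 1);
		t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
		return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
	};
}

// Persist only the seed and first-visit timestamp, not every frame.
function loadAnimationState(key = 'heroAnimationState') {
	const saved = localStorage.getItem(key);
	if (saved) return JSON.parse(saved);

	const state = {
		seed: Math.floor(Math.random() * 2 ** 31),
		startedAt: Date.now(),
	};
	localStorage.setItem(key, JSON.stringify(state));
	return state;
}

// Usage (browser only): everything downstream derives from the seed,
// so every page view replays the identical "randomness".
if (typeof localStorage !== 'undefined') {
	const { seed, startedAt } = loadAnimationState();
	const rand = seededRandom(seed);
	const elapsedMs = Date.now() - startedAt;
}
```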

Each radial-gradient "particle" in the animation gets its own base hue, size, and—crucially—a negative baseDelay, to position it in the animation timeline. By storing the exact moment the animation first started, I can calculate exactly how much time has passed since the user first landed on the site. The component calculates elapsedSeconds and subtracts that from the animation delay.

I use `requestAnimationFrame` to wrap these updates. It keeps React from yelling at me about synchronous renders while ensuring the animation stays in sync.

### State Variables into Style Variables

Once JavaScript figures out where the animation should be, it passes those values to CSS custom properties. I use `useMemo` to keep this efficient:

```typescript
const styleVars = useMemo(() => {
  if (!animationState || elapsedMs === null) return {};

  return {
    '--animation-offset-1': `${animationState.offsets.o1}%`,
    '--animation-color-1': `${animationState.colors.c1}%`,
    '--animation-delay': `-${elapsedMs}ms`, // The magic "rewind"
  } as React.CSSProperties;
}, [animationState, elapsedMs]);
```

The CSS takes it from here. Using `@property` rules and keyframes, the browser handles the heavy lifting of interpolating colors and movement. By setting a negative animation-delay, the browser effectively "fast-forwards" the animation to exactly where it should be.

```scss
@property --gradient-angle {
	syntax: '<angle>';
	initial-value: 160deg;
	inherits: false;
}

@property --gradient-stop-0-offset {
	syntax: '<percentage>';
	initial-value: 0%;
	inherits: false;
}

@property --gradient-stop-1-offset {
	syntax: '<percentage>';
	initial-value: 50%;
	inherits: false;
}

.heroAnimation {
	animation: gradient-animation 12s ease-in-out infinite;
	animation-delay: var(--animation-delay, 0ms);
	background: linear-gradient(
		var(--gradient-angle),
		var(--gradient-color-0)
			calc(var(--gradient-stop-0-base, 0%) + var(--gradient-stop-0-offset, 0%)),
		var(--gradient-color-1)
			calc(var(--gradient-stop-1-base, 60%) + var(--gradient-stop-1-offset, 0%))
	);
}

@keyframes gradient-animation {
	from {
		--gradient-angle: 160deg;
	}
	to {
		--gradient-angle: 42deg;
	}
}

@keyframes particle-drift {
	from {
		transform: translate3d(var(--particle-start, -50vw), 0, 0);
	}
	to {
		transform: translate3d(var(--particle-end, 120vw), 0, 0);
	}
}
```

## Performance & Polish

I'm not trying to crash anyone's browser, so we need a smart approach to handling visual complexity. Persisting the start timestamp plus the seed means returning sessions don't replay the animation from zero—they simply subtract the elapsed clock and set a negative delay via CSS, mimicking an ongoing loop. Since the state values are seeded and calculated once, the background can render as fast as the browser can read local storage.

By using `will-change`, `transform`, and GPU-driven keyframes, the animation stays buttery smooth even on crusty old phones. JavaScript just handles the "math" at the start, and CSS handles the "art" for the rest of the session.

## Moving Forward

If we take the time to get it right, there doesn't need to be a compromise between static, appealing, and performant animations. By using seeded randoms and local storage, we can give a static site the "soul" of a persistent application. The hero background on my site is no longer just a random loop; it's a continuous, evolving part of the user's journey.

Whether you're building a personal portfolio or a complex dashboard, remember that the best animations are the ones that respect the user's time, and the browser's main thread.

You can see the latest version of my persistent animation implementation here: [HeroAnimation.tsx on GitHub](https://github.com/andymagill/dev.magill.next/blob/master/app/components/global/HeroAnimation.tsx)

### Related Links & Resources

- [MDN: Using CSS Custom Properties](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) – A deep dive into the variables doing the heavy lifting here.
- [The Magic of Seeded Randoms](https://www.freecodecamp.org/news/seeded-random-number-generator-in-javascript/) – Why `Math.random()` is usually the wrong choice for persistent UI.
- [W3C: CSS Houdini & @property](https://www.w3.org/TR/css-properties-values-api-1/#at-property-rule) – How to create type-safe CSS variables for smoother transitions.
]]></description><link>https://magill.dev/post/persisting-animation-state-across-page-views-in-Reactjs</link><guid isPermaLink="false">https://magill.dev/post/persisting-animation-state-across-page-views-in-Reactjs</guid><category><![CDATA[React]]></category><category><![CDATA[Animation]]></category><category><![CDATA[Frontend]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 11:06:28 GMT</pubDate></item><item><title><![CDATA[Row Level Security in Serverless PostgreSQL for HIPAA Compliance]]></title><description><![CDATA[
It's time to revisit everyone's two favorite topics: Row Level Security (RLS) and HIPAA compliance. I'm here to give the people what they want, so here is my take on how to create a safe and orderly place for your legally-protected patient data to live.

If you’re building a patient-focused web app and you’re not thinking about HIPAA compliance, you haven't seen the [penalty structure for violations](https://www.ama-assn.org/practice-management/hipaa/hipaa-violations-enforcement#:~:text=HIPAA%20violation:%20Unknowing,imprisonment%20up%20to%201%20year.). For the rest of us, protecting patient data isn’t just a checkbox—it’s a survival skill.

### What the Heck is Row Level Security, and Why Should You Care?

Row Level Security (RLS) is PostgreSQL’s way of saying, “Welcome, but stay in your assigned space.” Your users become kinda like hotel guests, if only the door locks were as cool as SQL policies. RLS lets you centralize your access logic, so you can focus on giving your guests a great experience.

And yeah, it’s a HIPAA win: RLS helps you enforce the “minimum necessary” access rule, so you’re not handing out master keys when someone just needs access to one room.

### Shared Policies Using Many-to-Many Relationships

Row Level Security in PostgreSQL is powerful enough to handle even complex relationships like many-to-many mappings between clinicians and patients. By leveraging join tables and smart policies, you can ensure HIPAA compliance while maintaining a scalable and secure database structure. We'll have three tables: `patients`, `clinicians`, and `clinicians_patients`.

## 1. Create Policies for Clinicians

Let's say there's a many-to-many relationship between clinicians and patients managed through a `clinicians_patients` join table. We want clinicians to only see their own patients, but not others. Here's how we can get there:

```sql
-- A policy applies to a single command (or ALL), so SELECT and UPDATE
-- each get their own (identical) policy.
CREATE POLICY clinician_patient_select ON patients
  FOR SELECT
  USING (EXISTS (
    SELECT 1
    FROM clinicians_patients
    WHERE clinicians_patients.patient_id = patients.id
      AND clinicians_patients.clinician_id = current_setting('app.current_user')::int
  ));

CREATE POLICY clinician_patient_update ON patients
  FOR UPDATE
  USING (EXISTS (
    SELECT 1
    FROM clinicians_patients
    WHERE clinicians_patients.patient_id = patients.id
      AND clinicians_patients.clinician_id = current_setting('app.current_user')::int
  ));

CREATE POLICY clinician_patient_delete ON patients
  FOR DELETE
  USING (EXISTS (
    SELECT 1
    FROM clinicians_patients
    WHERE clinicians_patients.patient_id = patients.id
      AND clinicians_patients.clinician_id = current_setting('app.current_user')::int
  ));
```

Each policy checks whether the `clinician_id` in the `clinicians_patients` join table matches the current user's session variable. To make this work, your application must set the `app.current_user` session variable to the clinician's ID upon authentication (more on that in a second).

## 2. Enable RLS on Your Tables

We still need to tell PostgreSQL to actually care about row-level access. By default, it's blissfully ignorant. Enable RLS on all three tables:

```sql
ALTER TABLE patients ENABLE ROW LEVEL SECURITY;
ALTER TABLE clinicians ENABLE ROW LEVEL SECURITY;
ALTER TABLE clinicians_patients ENABLE ROW LEVEL SECURITY;
```

### One RLS Policy to Rule Them All

By default, superusers and table owners can bypass RLS, which can be risky in serverless setups where connections are shared. To lock down access completely, force RLS on sensitive tables:

```sql
ALTER TABLE patients FORCE ROW LEVEL SECURITY;
ALTER TABLE clinicians FORCE ROW LEVEL SECURITY;
ALTER TABLE clinicians_patients FORCE ROW LEVEL SECURITY;
```

This ensures all access follows your RLS policies, even for privileged users. In serverless environments, this step is crucial to protect sensitive data and maintain compliance. Now, not even the table owner can bypass your policies.

## 3. Serverless Gotchas

Serverless PostgreSQL is stateless, so we can’t rely on sticky sessions or nerd magic. We'll need to establish [PostgreSQL session variables](https://www.postgresql.org/docs/current/runtime-config-client.html) at the start of each connection. Our app’s authentication layer should handle this — _don’t trust anyone!_ But since we're cool, here are the deets:

### Set the PostgreSQL Session Variable

In your app, set the user session after successfully establishing a connection:

```javascript
// Node.js example with pg library
const { Client } = require('pg');

async function setSessionVariable(userId) {
	const client = new Client({ connectionString: process.env.DATABASE_URL });
	await client.connect();

	// SET doesn't accept bind parameters, so use set_config() instead
	await client.query("SELECT set_config('app.current_user', $1, false)", [
		String(userId),
	]);

	return client;
}
```

### Is All That Really Necessary?

Setting session variables at the start of each connection makes sure that user-specific context is explicitly defined. This context is critical for enforcing RLS policies, which depend on session variables to determine which rows a user can access. Without session variables, we're missing the necessary context to apply our shiny new policies and access controls.
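In a pooled serverless setup, it's safer to scope the setting to a transaction so a recycled connection can't leak one user's context to the next. Here's a rough sketch; the wrapper name and wiring are illustrative, not a specific library's API:

```javascript
// Sketch: per-request RLS context on a pooled connection.
// The pool is injected so the helper stays testable.
function makeUserContextRunner(pool) {
	return async function withUserContext(userId, fn) {
		const client = await pool.connect();
		try {
			await client.query('BEGIN');
			// is_local=true scopes the setting to this transaction, so a
			// recycled pooled connection can't leak another user's context.
			await client.query("SELECT set_config('app.current_user', $1, true)", [
				String(userId),
			]);
			const result = await fn(client);
			await client.query('COMMIT');
			return result;
		} catch (err) {
			await client.query('ROLLBACK');
			throw err;
		} finally {
			client.release();
		}
	};
}

// Hypothetical wiring with node-postgres:
// const { Pool } = require('pg');
// const withUserContext = makeUserContextRunner(
// 	new Pool({ connectionString: process.env.DATABASE_URL })
// );
// const rows = await withUserContext(req.user.id, (db) =>
// 	db.query('SELECT * FROM patients')
// );
```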

## Conclusion

Row Level Security in PostgreSQL isn't just a neat trick—it's a practical, scalable way to remain HIPAA-compliant without losing your mind (or your patients' data). In a serverless world, it's even more important to simplify access logic, to prevent unforeseen challenges from becoming critical failures.

With some thoughtful RLS policies, we can let PostgreSQL do the heavy lifting, while we sit back and admire what we accomplished. And if someone asks why you’re so calm about HIPAA audits, just wink and say, _“It’s all in the rows, my friend.”_

**Further Reading:**

- [PostgreSQL RLS Documentation](https://www.postgresql.org/docs/current/ddl-rowsecurity.html)
- [HIPAA Security Rule Summary (HHS.gov)](https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html)
- Serverless PostgreSQL Providers: [Neon](https://neon.tech/), [Supabase](https://supabase.com/), [AWS Aurora](https://aws.amazon.com/rds/aurora/serverless/)
]]></description><link>https://magill.dev/post/row-level-security-in-serverless-postgresql-for-hipaa-compliance</link><guid isPermaLink="false">https://magill.dev/post/row-level-security-in-serverless-postgresql-for-hipaa-compliance</guid><category><![CDATA[Serverless]]></category><category><![CDATA[HIPAA]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 04:38:49 GMT</pubDate></item><item><title><![CDATA[Simplified Content Management with Markdown for Bots and Humans]]></title><description><![CDATA[
As long as content exists, content management will be an important factor for online publishers, bloggers, and developers alike. Some tools like no-code platforms level the playing field with big applications that can render content in endless ways. But what if we didn't need a big application to format our content?

One tool that has gained popularity for simplifying this process is Markdown. In this post, we'll discuss the benefits of using Markdown and how you can incorporate it into your projects. I'm such a fan that I wrote this very article in Markdown for [my own blog](https://magill.dev/blog)!

### Benefits of Using Markdown

**Simplicity and Ease of Use**  
Markdown is as simple and user-friendly as plain text, making it easy to learn and implement. The straightforward syntax lets you focus on writing rather than wrestling with formatting, and separating content from presentation eliminates those "it looked different on my computer" moments.

**Drag It Anywhere Without Breaking**  
One of Markdown's standout features is portability. Since Markdown _is_ plain text, it can be easily stored and transmitted across platforms and tools without losing structure or formatting. This helps avoid the compatibility headaches often encountered with more complex formats like those from MS Office and Google Docs.

**Version Control Without the Meltdowns**  
Markdown works seamlessly with version control systems like Git. This compatibility is invaluable for collaborative projects, allowing multiple contributors to track changes easily. You can manage documentation or content updates without the hassle of formatting conflicts. Your blood pressure will thank you.

### Where Markdown Shines Brightest

**Documentation**  
Markdown is a great way to format technical information in README files, project documentation, and user guides. Its clarity makes it ideal for maintaining accessible documentation that can grow alongside your project. It's natively supported on GitHub, Confluence, Google Docs, Notion, and many other knowledge-base platforms. _If ya can't beat em, use their tech stack_, I always say (not really).

**Blogging**  
Many blogging platforms support Markdown, enabling a streamlined writing process for your posts. In fact, this very [blog post](https://magill.dev/post/simplified-content-management-with-markdown) was crafted using Markdown! By integrating it into my blogging workflow, I can create content quickly while enjoying instant formatting feedback.

### Markdown with Metadata

If you are concerned about being limited to blobs of formatted text, I would remind you about [gray-matter](https://www.npmjs.com/package/gray-matter) for Markdown. Gray-matter lets you append structured data to the top of your Markdown content, for stashing all sorts of useful info (tags, publication dates, author info, or whatever):

```markdown
---
title: 'Best Article Ever'
date: '2025-04-18'
author: 'Yours Truly'
tags: ['markdown', 'tech', 'rants']
image: 'images/amazing-picture.jpg'
featured: true
---

Your awesome content _starts here..._
```
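To show what's happening under the hood, here's a dependency-free sketch of roughly what gray-matter does; it handles only simple quoted scalars and booleans for illustration, while the real package parses full YAML:

```javascript
// A minimal sketch of front-matter parsing: split the --- block from
// the body, then read simple `key: 'value'` pairs.
function parseFrontMatter(raw) {
	const match = /^---\n([\s\S]*?)\n---\n?/.exec(raw);
	if (!match) return { data: {}, content: raw };

	const data = {};
	for (const line of match[1].split('\n')) {
		const idx = line.indexOf(':');
		if (idx === -1) continue;
		const key = line.slice(0, idx).trim();
		let value = line.slice(idx + 1).trim().replace(/^'|'$/g, '');
		if (value === 'true') value = true;
		if (value === 'false') value = false;
		data[key] = value;
	}
	return { data, content: raw.slice(match[0].length) };
}

const post = "---\ntitle: 'Best Article Ever'\nfeatured: true\n---\nYour awesome content...";
const { data, content } = parseFrontMatter(post);
console.log(data.title); // 'Best Article Ever'
console.log(data.featured); // true
console.log(content); // 'Your awesome content...'
```

In practice, just `require('gray-matter')` and call `matter(raw)` to get the same `{ data, content }` shape.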

### Markdown & AI Are Like Peanut Butter & Bananas

Let's talk about the elephant in the room: AI. And guess what? AI does a great job working with Markdown. AI tools need structured but flexible ways to understand and generate content, and Markdown hits that sweet spot perfectly: far simpler than HTML or XML, but still expressive.

When I'm using ChatGPT or Claude to help draft content (like [this article](https://magill.dev/post/row-level-security-in-serverless-postgresql-for-hipaa-compliance)), I ask for Markdown output, because I can drop it straight into my workflow without playing a game of "fix the formatting." Plus, most AI tools are trained on tons of Markdown documentation, so they understand the syntax better than any human.

### The Closing tag

Look, I'm not saying Markdown will change your life or make you a better person or anything. But it would be silly to ignore the advantages Markdown offers for content formatting and management. Its simplicity, portability, and compatibility with third-party platforms make it a useful tool in a lot of situations. I've written a little more about [how I've implemented Markdown](https://magill.dev/post/lets-breakdown-my-website) on my [blog](https://magill.dev/). If any of this sounds interesting, maybe it is time to consider using it in your own projects.

### Related Links

- [Introduction to Markdown](https://www.writethedocs.org/guide/writing/markdown/) from WriteTheDocs.Org
- [Basic writing and formatting syntax](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) from GitHub
- [Markdown Documentation](https://www.codecademy.com/resources/docs/markdown) from Codecademy.
]]></description><link>https://magill.dev/post/simplified-content-management-with-markdown</link><guid isPermaLink="false">https://magill.dev/post/simplified-content-management-with-markdown</guid><category><![CDATA[Markdown]]></category><category><![CDATA[CMS]]></category><category><![CDATA[JAMstack]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 00:37:35 GMT</pubDate></item><item><title><![CDATA[Strip Debug Logs at Build Time with Next.js Compiler Options]]></title><description><![CDATA[
I never thought I would recommend anyone get into stripping, but sometimes you gotta do what you gotta do. My [last article](https://magill.dev/post/javascript-debugging-utility-to-guard-noisy-production-consoles) described a tamer approach, runtime guarding of debug logs. But if you really want clean logs and easier debugging, the simplest solution might just be to remove debug calls during the build process. Instead of wrapping every `console.debug()` in guards at runtime, you can tell the Next.js compiler to strip them out when you create your production build.

## Why build-time removal?

- Zero runtime overhead — no environment checks on every call.
- Debug statements can't accidentally leak because they're not in the final bundle.
- Keeps source code readable without littering it with guards.

This is a lot simpler than the [runtime-guarded logger in my previous article](https://magill.dev/post/javascript-debugging-utility-to-guard-noisy-production-consoles), which keeps calls in the bundle but silences them in production. Build-time removal deletes the calls from the bundle entirely — more permanent, less flexible.

## Next.js compiler: removeConsole

Next.js exposes a compiler option that can remove console calls during the build. You don't need Babel or a Webpack plugin for this — the compiler can do it for us. I just need to update the `next.config.js` file:

```javascript
// filepath: /next.config.js
module.exports = {
	compiler: {
		// Remove console.* calls in production, but keep error and warn
		removeConsole:
			process.env.NODE_ENV === 'production'
				? { exclude: ['error', 'warn'] }
				: false,
	},
};
```

Next time you build with this config, `console.log`, `console.debug`, and `console.info` will be removed from the built client bundles, while `console.error` and `console.warn` remain in the shipped code.

## Benefits vs runtime guarding

- No more runtime checks, smaller/cleaner production bundles.
- Impossible to accidentally log sensitive values in production because the code is removed.
- Simple configuration in next.config.js — no extra plugins or custom code.

## When to pick which

- Use the Next.js compiler's removeConsole when you want absolute assurance that debug/log calls never reach production.
- Use a runtime-guarded logger when you require strict code consistency across environments, or want the ability to control debug logic dynamically.

## Closing Tag

Both approaches are valid for different scenarios, and each developer will have their own preference. If your priority is safety and zero risk of leaking debug output, strip logs at build time with the Next.js compiler. If you want flexibility and occasional production introspection, consider keeping the runtime guard and a central logger.

## References

- **Console API** — MDN Web Docs  
  https://developer.mozilla.org/en-US/docs/Web/API/Console

- **Next.js: Compiler** — removeConsole option
  https://nextjs.org/docs/advanced-features/compiler
]]></description><link>https://magill.dev/post/strip-debug-logs-at-build-time-with-nextjs</link><guid isPermaLink="false">https://magill.dev/post/strip-debug-logs-at-build-time-with-nextjs</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Build]]></category><category><![CDATA[Observability]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 08:03:21 GMT</pubDate></item><item><title><![CDATA[When to Use ES6 Sets Instead of Arrays in JavaScript]]></title><description><![CDATA[
If you are like me, you often reach for arrays out of habit. They’re flexible, familiar, and perfect for most everyday tasks like rendering UI, keeping things in order, or working with duplicates.

But sometimes we need to guarantee uniqueness as a requirement, or check values in a huge list quickly. That's where ES6 Sets come in. Let's consider some real-world examples of both Sets and Arrays, and demonstrate how to properly use them in your next project.

## Tracking Unique Events with Sets

Suppose you’re building a notification or error logging system that needs to track which unique error codes have occurred, so you don’t repeatedly alert users about the same issue.

```js
const uniqueErrorCodes = new Set();

function handleError(code) {
	if (!uniqueErrorCodes.has(code)) {
		uniqueErrorCodes.add(code);
		// Show notification or log error
		console.log(`New error: ${code}`);
	}
}
```

### Why use a Set here?

**Performance:** `Set.has()` offers [constant-time (O(1))](https://medium.com/analytics-vidhya/big-o-notation-time-complexity-in-javascript-f97f356de2c4) lookups, so checking for a value is much faster than `Array.includes()`, which runs in linear (O(n)) time that grows with the size of the collection.

**Uniqueness:** Sets automatically enforce uniqueness, so you never have to worry about duplicate error codes.

**Scalability:** As your app grows and more error codes are tracked, Sets remain fast and efficient, while Arrays slow down with each additional check.

### Limitations of Sets

While Sets offer unique advantages, arrays are still preferable in many scenarios:

- **Indexing & Ordering:** Arrays maintain the order of elements and allow direct access by index (e.g., `arr[2]`). Sets do not support index-based access.
- **Advanced Methods:** Arrays have methods like `map`, `filter`, `reduce`, and `sort` that are not available on Sets. If you need to transform or aggregate data, arrays are often more convenient.
- **Serialization & Compatibility:** Arrays can be easily serialized to JSON, while Sets require conversion first. Many libraries and APIs expect arrays, not Sets. Conversion adds brittle 'glue-code' to integrations.
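When you need both behaviors, converting between the two is cheap. A quick sketch of the round trip:

```javascript
// Round-tripping: dedupe with a Set, then convert back to an Array
// when you need indexing, array methods, or JSON serialization.
const codes = ['E404', 'E500', 'E404', 'E403'];

const unique = [...new Set(codes)]; // insertion order is preserved
console.log(unique); // ['E404', 'E500', 'E403']

// Sets don't serialize to JSON directly...
console.log(JSON.stringify(new Set(codes))); // '{}'

// ...so convert first:
console.log(JSON.stringify(unique)); // '["E404","E500","E403"]'
```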

## Displaying Form Validation Errors with Arrays

When building forms in React, it’s common to collect and display multiple validation errors to the user. The order of errors and their ability to be referenced by index (for accessibility or animation) make Arrays the more suitable option here:

```jsx
import React from 'react';

const errors = [
	'Email is required.',
	'Password must be at least 8 characters.',
	'Please accept the terms and conditions.',
];

function ErrorList() {
	return (
		<ul className='error-list'>
			{errors.map((error, index) => (
				<li key={index} className='error-item'>
					{error}
				</li>
			))}
		</ul>
	);
}

export default ErrorList;
```

### Why use Arrays here?

- The order of errors matters for user experience.

- Arrays allow easy mapping and indexing for React keys.

- Sets would silently drop duplicate errors and don’t support index-based access, which could confuse users.

- Arrays are ideal for rendering ordered UI lists, such as form validation errors, notifications, or steps in a process, where order and duplicates may matter.

## The Closing Tag

Sets are a valuable tool when you need to guarantee uniqueness or need fast lookups of very large lists. But the array remains the reigning champion of ordering, indexing, and manual manipulation. Reach for the right tool and you can produce code that’s both efficient and easy to work with.

---

### Related Links

- [MDN Web Docs: Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set)
- [MDN Web Docs: Array](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array)
- [JavaScript.info: Map and Set](https://javascript.info/map-set)
- [ES6 In Depth: Collections](https://hacks.mozilla.org/2015/06/es6-in-depth-collections/)
]]></description><link>https://magill.dev/post/when-to-use-es6-sets-instead-of-arrays-in-javascript</link><guid isPermaLink="false">https://magill.dev/post/when-to-use-es6-sets-instead-of-arrays-in-javascript</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[ES6]]></category><category><![CDATA[Sets]]></category><category><![CDATA[Arrays]]></category><dc:creator><![CDATA[Andrew Magill]]></dc:creator><pubDate>Wed, 21 Jan 1970 06:10:55 GMT</pubDate></item></channel></rss>