
A collection of things.
By Chris James Martin

Fixed Feedburner RSS

08 May 2025 — Truckee

Post by @[email protected]
View on Mastodon

šŸ‘‹ Hi Phil!

This blog has been around for a long time. From 2010 until about two months ago, the RSS feed pointed to a FeedBurner proxy, which in turn pointed to an old Atom feed that broke sometime in the last 15 years.

Two months ago I fixed the RSS feed and updated the link to remove FeedBurner, since its utility seems to have gone the way of Google Reader 🪦

This is all great for new subscribers like Phil (ā¤ļø), but I wonder if there are any other RSS-lovers out there with my old feedburner link in their readers?

If so, hello! As of today I’ve pointed the old feedburner proxy at the working feed.

Add AirPrint to any printer with a Raspberry Pi + CUPS

07 May 2025 — Truckee

Hello Chris, This is God
If ChatGPT could print.

I have an old Brother HL-2170W laser printer. It’s bomb-proof, and doesn’t have any weird toner subscriptions, DRM, or whatever else they’re doing to printers these days.

However, it does not support modern protocols like AirPrint, and while it technically has WiFi, the firmware hasn’t been updated in a decade(?) and only supports WEP, so that’s not going to work.

Right now, it’s connected via Ethernet to the AP in my living room, and we just print to it from whichever computer feels like working with it directly that day. But I’d really rather not have a printer in my living room.

Recently, I rediscovered a few Raspberry Pi 1s and realized they’d be perfect to modernize this otherwise excellent printer and turn it into an AirPrint-compatible print server.

Supplies

  • Raspberry Pi
    Any Raspberry Pi will do; I used a Pi 1 for this project, but a Pi Zero would work great.
  • If using a Pi 1 or 2: USB Wi-Fi adapter
    I already had this one, but any basic adapter should work.
  • MicroSD card
    Needs to be >2GB, I think. I used an 8GB I had lying around.
  • USB cable to connect the printer
    This was the one thing I didn’t have! I ordered 3 for ~$7, and now have 2 for future projects that need old USB cables.
  • Power supply for the Pi
    Should be able to use any USB brick + micro-USB cable, but I have had issues with some not supplying enough current. If the Pi loops on boot, check the power supply.

1. Install Raspberry Pi OS (Lite)

Use the Raspberry Pi Imager to install the latest Raspberry Pi OS Lite (click through ā€œRaspberry Pi OS (other)ā€ to get to the lite version) onto your SD card. There’s no need for a full desktop environment for this job.

Raspberry Pi OS Lite selection in Raspberry Pi Imager
Raspberry Pi OS (other) -> Raspberry Pi OS Lite

Before flashing, preconfigure some settings:

  • Hostname
  • User + Password
  • Wi-Fi credentials
  • SSH public key
Raspberry Pi OS settings in Raspberry Pi Imager
Hostname + Username + WiFi Settings
Raspberry Pi OS SSH Public Key in Raspberry Pi Imager
SSH Public Key

Setting the SSH public key is optional, but really nice to do ahead of time. It looks like you can just click that “RUN SSH-KEYGEN” button and it’ll probably do everything for you. Otherwise, grab your public key out of ~/.ssh (if you already have one).

2. Boot and SSH into the Pi

  • Plug the Pi into power and wait for it to boot and connect to the network.
  • You can find the Pi’s IP address from your router admin, or try using the hostname you configured:
ssh <your-user>@<hostname>.local

3. Update and Install CUPS

sudo apt update && sudo apt upgrade
sudo apt install cups
sudo apt install printer-driver-brlaser

The printer-driver-brlaser package installs a CUPS-compatible driver for many Brother printers, including the HL-2170.

This process should work for any printer but I only installed the brlaser driver because it’s all I needed. The foomatic-db package contains drivers for many different printers, but I recommend searching for your specific printer and installing only what you need.

4. Configure CUPS

Add your user to the lpadmin group

Add the user that you configured in the Raspberry Pi imager to the lpadmin group so that you can use it to manage printers:

sudo usermod -aG lpadmin <your-user>

Listen and allow access to CUPS web interface on local network

By default, CUPS only listens on localhost. To allow access from other devices on your network:

sudo nano /etc/cups/cupsd.conf

Make these changes:

Listen on local network

Find:

# Only listen for connections from the local machine.
Listen localhost:631

Change to:

# Listen for connections on the local network.
Listen <hostname>.local:631

Allow local network access to admin web interface

Update the following sections:

# Restrict access to the server...
<Location />
  Order allow,deny
  Allow @local
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow @local
</Location>

# Restrict access to configuration files...
<Location /admin/conf>
  AuthType Default
  Require user @SYSTEM
  Order allow,deny
  Allow @local
</Location>

Save and exit, then restart CUPS:

sudo systemctl restart cups

5. Access CUPS Web Interface

From another device on the same network, go to:

https://<hostname>.local:631

You will probably get an SSL nastygram from your browser. Tell it you want to go to the website anyway.

  • From the Administration page, click ā€œAdd Printerā€.
  • Log in with the system username and password you set up in the Raspberry Pi Imager.

In the Add Printer flow, select these options on each page:

  1. Select the USB-connected Brother printer. It should be listed as a ā€œLocal Printerā€.
    My printer also showed up (twice) under discovered network printers. In theory you could set this up with the printer connected via Ethernet to your network and get AirPrint support without connecting it to the Raspberry Pi via USB, but that’s not my goal.
  2. Make sure the name/description/location look good, and select ā€œShare This Printerā€.
  3. Choose the closest printer to your specific model, for my HL-2170W I picked HL-2140. Unless you installed other drivers it will use brlaser no matter which model you select.
  4. After clicking “Add Printer” on the final screen, you should see “Printer has been added successfully.”
Driver selection screen in CUPS web admin interface
Model selection, these will all use the brlaser driver.

Continue to ā€œSet Printer Optionsā€ and make any appropriate changes. I only needed to set the default paper size from A4 to Letter.

You should now be able to go to the ā€œPrintersā€ section in the CUPS web interface, select your newly added printer, and print a test page! šŸŽ‰

CUPS test page
You can have any color you'd like, as long as it's a shade of grey.

6. Fix CUPS on reboot

On my setup, CUPS starts on boot, but the web interface and printers are not accessible after a reboot, likely because the service starts before the network is ready. They stay unavailable until the service is restarted.

Restart CUPS when the network comes up

Create a script at /etc/network/if-up.d/cups:

sudo nano /etc/network/if-up.d/cups

Paste this:

#!/bin/sh
# if-up.d scripts already run as root, so no sudo is needed here
systemctl restart cups.service

Then make it executable:

sudo chmod +x /etc/network/if-up.d/cups

This ensures that CUPS is restarted once the network is available.
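Alternatively, on systemd-managed setups you could make the cups service itself wait for the network with a drop-in unit. This is an untested sketch (the drop-in filename is hypothetical, and whether network-online.target fires correctly depends on your network stack):

```ini
# /etc/systemd/system/cups.service.d/wait-for-network.conf
# Hypothetical drop-in; run `sudo systemctl daemon-reload` after creating it.
[Unit]
After=network-online.target
Wants=network-online.target
```

I went with the if-up.d script because it’s simple and works on my setup.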

That’s it!

You should now be able to print via AirPrint or network printing from devices on your local network.

  • āœ… The printer shows up via AirPrint on iPhones and iPads
  • āœ… It appears as a network printer on macOS
  • āœ… I can move my printer out of my living room
  • ☐ It should also work on Windows/Android/etc, but I haven’t tried…

Reference: This archived post gives a pretty solid walkthrough of similar steps for setting up a Raspberry Pi print server, but I don’t need the full foomatic-db since I’m only interested in setting up my Brother laser (for now).

Weeknotes: Spring Break, Science Projects, Raspberry Pi Fun, and Trains!

21 Apr 2025 — Truckee, CA

Last week was spring break for our local school district, and Lauren and I both took breaks from work to have a family-focused week. It was a low-stress, low-pressure, great time.

We often put a lot of pressure on ourselves to ā€œmake the mostā€ of these school/work breaks, but we’ve learned over the years that trying to cram too much (too many activities, but also too much expectation) into a week off can be… too much. This year, with a bit of initial discomfort, we went into the week with minimal plans and expectations.

Ski Day

Monday we headed out to Alpine Meadows for a family ski day. We don’t get the opportunity often to all ski together, and after shaking out some initial kid-grumpies related to our differing ability levels, we found a vibe where everyone had fun.

The big kids wound up one-skiing the whole mountain to level the playing field and introduce some hilarity.

Family Ski!

One-skiing from the summit chair: Lucas and Isaac had slightly different approaches šŸ˜‚

When two skis are too easy.

The ski season is quickly winding down, so this will probably be one of the last days we all get out together. It was a good one.

Discovery Museum Reno

Tuesday we went down to the Discovery Museum in Reno with some friends. The kids happily spent hours exploring the various exhibits; all ages were engaged, including the adults.

Surprisingly, I only have one photo from the day: this impressive certificate I earned for solving a bunch of puzzles in one of the exhibits.

Mindbender Society

Science Fair Project

Brilliant move by Lucas’ school to schedule their science fair for the week after spring break… at least for kids like Lucas who will take advantage of the time to get work done.

I was able to help him build a ā€œfluid conductivity testerā€ using a multimeter as an ammeter, and a USB controller providing a very low amperage (µA) circuit for testing.

Science Fair Project Time!

Testing Conductivity of Liquids

We need to come up with more projects to build (and include the other kids). It was really fun working together.

Fun With Raspberry Pi(s)

While Lucas worked away on his project, I dug through my electronics toolbox and did some inventory. Turns out I had 4 old Raspberry Pi boards (and a bunch of Arduinos) buried in there just begging to be used for something.

Raspberry Pis

I set up one of the version 1 boards with Pi-hole for DNS-powered ad blocking and Cloudflared to play with Cloudflare tunnels. Pretty fun little side project!

I’m not sure what I’ll do with the version 3 with the screen, but it’s a good excuse to 3D print an enclosure! So that’ll be happening soon.

Trains

We wrapped the week with another museum trip down to the California Railroad Museum in Sacramento.

Sacramento Northern

The railroad museum is excellent. We visited many years ago, taking the train up from SF and spending the night in Old Sacramento, and this trip recreated that experience; it was the first time for all but the oldest kids. We explored the museum on Friday, spent the night at a hotel just down the street, and enjoyed a ride along the river on some of their vintage carriages on Saturday morning.

Train Ride Along the River

This is an excellent little overnight excursion for families. Next time we’ll take the California Zephyr down from Truckee for even more train time. šŸšžšŸš‰šŸš‚

A Good Week

We wrapped the week with Easter festivities (way too much candy) and a lovely, last-minute dinner celebration with friends. It’s the time of the year that always feels like we come out of hibernation, and all of our activities suddenly flip over to summer mode. I’m glad we stayed close to home to enjoy it.

'Slopsquatting' on Hallucinated Package Names

12 Apr 2025 — Truckee, CA

From the department of šŸ¤¦ā€ā™‚ļø

Apparently LLMs don’t just hallucinate package names (and include unnecessary real packages), but they hallucinate the same non-existent package names enough that it’s possible bad actors could register malicious packages under the made up names.

As noted by security firm Socket recently, the academic researchers who explored the subject last year found that re-running the same hallucination-triggering prompt ten times resulted in 43 percent of hallucinated packages being repeated every time and 39 percent never reappearing.

This isn’t a new concept. “Typosquatting” has long been an issue: registering frequently typo’d domain names for scam sites and phishing attempts.

And I guess it’s already happening in the wild…

“When we investigate, we sometimes find realistic-looking READMEs, fake GitHub repos, even sketchy blogs that make the package seem authentic,” he said, adding that Socket’s security scans will catch these packages because they analyze the way the code works.

This is the icing on the cake:

ā€œEven worse, when you Google one of these slop-squatted package names, you’ll often get an AI-generated summary from Google itself confidently praising the package, saying it’s useful, stable, well-maintained. But it’s just parroting the package’s own README, no skepticism, no context. To a developer in a rush, it gives a false sense of legitimacy.

ā€œWhat a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful.ā€

Just this morning I was using Claude to prototype an AI Agent that uses MCP, and it generated a script with these super useful sounding packages:

# MCP and agent libraries
from mcp.agent import Agent
from mcp.tools import ToolRegistry
from mcp.tools.rss import RssFeedTool
from mcp.tools.github import GitHubTool
from mcp.memory import MemoryStore
from mcp.scheduler import Scheduler
from mcp.llm import LLMClient

As an experienced developer I was skeptical that such a convenient collection of tools existed for such a new standard, so I followed up:

This looks great, can you point me to documentation for the mcp libraries?

And got the response:

I should clarify something important: the MCP (Model Context Protocol) libraries used in my example (mcp.agent, mcp.tools, etc.) are conceptual and don't currently exist as published packages in this exact form. I created a hypothetical implementation based on what an ideal MCP framework might look like.

Thanks for the clarification, Claude.

So, how would one profit off of malicious mcp libraries? Asking for a friend.

Resurrecting launchcalendar.org

11 Apr 2025 — Truckee, CA

I’m reviving an old side project called launchcalendar.org, which I began in 2016. I started it when I realized I had no idea how many rockets were being launched weekly (way more than I thought!).

I was learning about them after the fact via images posted to Flickr, but I wanted a calendar feed to subscribe to that would let me track the launch times with links to a live stream, payload information, and launch location. Basically calendar invites to watch rockets take off.

At the time, I created a rough prototype using Jekyll + GitHub Pages. Each post was a launch, and an iCal calendar file, which could be subscribed to in Google Calendar or Apple Calendar, was generated from the posts. Launches are often delayed, and those updates needed to be reflected, so I was manually entering and updating all the data.

While the system worked and would have been great for subscribers, it didn’t solve my own use case: I still had to do all the work to track and enter the launches. I ended up taking on some contract work, and the project fell off my plate.

I still think it’s a good idea, and the scope is small enough to be a good project to explore interests without requiring a whole lot of ā€œother stuff.ā€ Since it was built with Jekyll and hosted on GitHub Pages (no server logic, no DB), everything was still functional. I re-registered the domain, pointed the DNS at the GitHub Pages repo, and reactivated my Mapbox account because I was using Mapbox maps on the individual launch pages, which I never finished.

With those things done, the project was up and running just as I left it in 2016: launchcalendar.org.

Ugly list of launch schedule! - Still ugly, but back up and running.

There’s a solid foundation for a working system. With newer technologies and workflows, I believe I can make this project better. My immediate plan is to clean up the website and finalize design ideas for each launch page, finishing it to the point I wanted it to reach in 2016. That still doesn’t solve the issue of data input.

Data Entry Plans

To get data into the system, I plan to set up a process where humans (probably me) can enter and update launch data on the website. Updates will generate pull requests, which I can review and approve. Once approved, they’ll be merged, and the data will update automatically.

AI Agent

There are a number of sites that publish great data about launches and space news in general. I did not and do not want to scrape anything to programmatically pull data. However, I think if an AI agent could search for information and tell me about it, that would solve much of the problem!

I plan to build an AI agent to search for launch data and generate draft entries and updates about launches. The agent will search for upcoming launches, live stream links, photos, updates/delays, etc., and open pull requests like a human would. Then I can review the PRs, ensure all the info is correct, make extra sure everything is attributed correctly with links to sources, and then publish the updates. I wouldn’t trust an AI agent to always be correct and automate the whole process, but if it can find relevant information and file pull requests for me to review, I think this thing could work.

Other Improvements

I’d like to add Bluesky posts in addition to the iCal calendar feed, so people can follow on Bluesky if they’d prefer. I briefly started thinking about rebuilding this whole thing on AT Protocol (I really want to build something on AT Protocol), but the Jekyll setup is so beautifully simple that I think it’s perfect for this project.

Follow Along

I’ll post as I make progress here, but you can also subscribe in your calendar app, or Google Calendar, if you want to see new launches as they’re added.

And of course you can follow the whole project on GitHub.

Tools: Jekyll Tools

09 Apr 2025 — Truckee, CA

My to-do list for this morning says ā€œWork on Taxes.ā€ Instead, I’m writing tools to make updates here.

A few things had been bugging me:

  • Posts were sorted by date but not time, so posts on the same day were sorted alphabetically instead of chronologically.
  • My permalinks were overly simplistic. All posts used the format /journal/title-slug. While unique titles avoided namespace issues, as I added categories and date-based index pages, I wanted the URLs to reflect some of that data.
  • Jekyll’s method of including categories in permalinks adds all categories, like /category1/category2. I’d like to show only a “main” category in the URL.

I solved these issues with a combination of plugins to handle things dynamically at build time and scripts to bake some data into the posts.

Sorting

Jekyll sorts posts by date from the filename. If you have a properly formatted datetime value in your posts’ front matter, you can sort by date + time. I don’t. I use a simple time value in 12-hour format because I’m lazy and don’t want to think about date formats.

I solved this with a custom plugin to calculate an ISO 8601 datetime value at build time from the filename date and post time.

# _plugins/add_datetime_field.rb
require 'time'

Jekyll::Hooks.register :site, :post_read do |site|
  site.posts.docs.each do |post|
    if post.data['time']
      post_date = post.date.strftime('%Y-%m-%d')
      post_time = post.data['time']

      begin
        time_obj = Time.parse(post_time)
        combined = Time.parse("#{post_date} #{time_obj.strftime('%H:%M')}")
        post.data['datetime'] = combined.iso8601
      rescue ArgumentError => e
        Jekyll.logger.warn "Datetime Plugin:", "Could not parse time '#{post_time}' in #{post.path}: #{e.message}"
      end
    end
  end
end
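For context, the front matter this plugin parses looks something like this (hypothetical values; the plugin only cares about the time key):

```yaml
---
title: "Tools: Jekyll Tools"
time: 9:15 AM
place: Truckee, CA
---
```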

With the datetime value added, sorting posts by date + time is as easy as:

{% assign sorted_posts = site.posts | sort: "datetime" | reverse %}
{% for post in sorted_posts limit:10 %}
  <div class="post">
    <h1 class="post-title"><a href="{{ post.url }}">{{ post.title }}</a></h1>
    <p class="meta">{{ post.date | date_to_string }}{% if post.place %} &#8212; {{ post.place }}{% endif %}</p>
    <div class="post-content text">
      {{ post.content }}
    </div>
  </div>
{% endfor %}

For paginated pages, I used the jekyll-paginate-v2 plugin, which supports sorting.

The v2 plugin is drop-in compatible with jekyll-paginate; all I had to do was update my _config.yml from:

paginate: 10
paginate_path: /journal/page:num/

To ↓

pagination:
  enabled: true
  per_page: 10
  permalink: /page:num/
  sort_field: datetime
  sort_reverse: true

Now posts are sorted by date + time on the main index and paginated /journal pages. Yay!

Before updating permalinks, I ensured existing links would still work. The jekyll-redirect-from plugin maps old permalinks to new ones.

Redirects are performed by serving an HTML file with a meta refresh tag pointing to your destination. No .htaccess file, nginx conf, XML file, or anything else is generated. It simply creates HTML files.

Not as great as a proper 301 or 302, but fine for my needs.

After installing the plugin, I added this to _config.yml to avoid generating a redirects.json that I don’t need:

redirect_from:
  json: false

Since all existing posts needed redirects, I used a Python script to write the old permalink into each post’s front matter.

See add_redirects.py and its README section.
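The core of that script looks roughly like this. This is a simplified sketch, not the actual add_redirects.py; the hypothetical add_redirect helper assumes Jekyll’s standard YYYY-MM-DD-slug.md post filenames and the old /journal/<slug>/ permalink format:

```python
import re
from pathlib import Path

def add_redirect(post_path):
    """Write a redirect_from entry for the old /journal/<slug>/ permalink
    into a post's YAML front matter. Returns False if it's already there."""
    text = Path(post_path).read_text()
    # Jekyll post filenames look like YYYY-MM-DD-title-slug.md
    slug = re.sub(r"^\d{4}-\d{2}-\d{2}-", "", Path(post_path).stem)
    old_url = f"/journal/{slug}/"
    if old_url in text:
        return False
    redirect_block = f"redirect_from:\n- {old_url}\n"
    # Insert just before the closing '---' of the front matter
    new_text = re.sub(r"\n---\n", f"\n{redirect_block}---\n", text, count=1)
    Path(post_path).write_text(new_text)
    return True
```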

After running ↑ that script ↑ every existing post had an entry in its front matter like:

redirect_from:
- /journal/voice-memo-manager/

To handle requests to the old URL.

Once redirects were set up, I updated the default post permalink in _config.yml:

defaults:
  - scope:
      path: ""
      type: "posts"
    values:
      permalink: /journal/:year/:month/:slug/

Now posts have permalinks with year and month, which will be useful for future index pages. Old /journal/:title links redirect nicely. Huzzah.

Jekyll supports :categories in permalinks, but I dislike how it handles posts in multiple categories. For example, this post is in both tools and field-notes. It will appear on /tools and /field-notes category pages.

Using /:categories/:year/:month/:slug/ would result in /tools/field-notes/2025/04/jekyll-tools. I dislike this because /tools/field-notes will never be a valid category URL.

I prefer setting a link_category in the post front matter to specify the primary category in the permalink.

categories:
  - tools
  - field-notes
link_category: tools

So the post permalink becomes /tools/2025/04/jekyll-tools.

Jekyll doesn’t support custom front matter variables in permalinks, so I created a plugin to set a permalink for posts with link_category or categories.

  • If link_category is set, it is used as the primary category in the permalink.
  • If link_category is not set but the post has categories, the first category is used.
# _plugins/add_custom_permalink.rb
require 'time'

Jekyll::Hooks.register :site, :post_read do |site|
  site.posts.docs.each do |post|
    # Determine the link_category
    link_category = post.data['link_category']
    if !link_category && post.data['categories'] && post.data['categories'].any?
      link_category = post.data['categories'].first
    end

    # Skip this post if no link_category or categories are available
    next unless link_category

    # Extract year and month from the post date
    year = post.date.strftime('%Y')
    month = post.date.strftime('%m')
    slug = post.data['slug'] || post.data['title'].downcase.strip.gsub(" ", "-").gsub(/[^\w-]/, "")

    # Generate the custom permalink
    custom_permalink = "/#{link_category}/#{year}/#{month}/#{slug}/"

    # Set the custom permalink
    post.data['permalink'] = custom_permalink
  end
end

Now posts with categories have permalinks including one primary category instead of the default /journal. Hooray!

I also added a script to set link_category for all posts with existing categories. See set_link_category.py and its README section.

This script scans posts and sets the first category as link_category in their front matter. While the plugin fallback handles this, the script ensures permalinks won’t break if I add categories to old posts.
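The idea, as a simplified sketch (not the actual set_link_category.py; the hypothetical set_link_category helper assumes categories are written as a YAML block list, like the example above):

```python
import re
from pathlib import Path

def set_link_category(post_path):
    """Add link_category (the first listed category) to a post's front
    matter if it doesn't already have one."""
    text = Path(post_path).read_text()
    if "link_category:" in text:
        return False  # already set
    # Grab the first "- name" item under the categories key
    m = re.search(r"categories:\n\s*-\s*(\S+)", text)
    if not m:
        return False  # no categories to promote
    line = f"link_category: {m.group(1)}\n"
    # Insert just before the closing '---' of the front matter
    new_text = re.sub(r"\n---\n", f"\n{line}---\n", text, count=1)
    Path(post_path).write_text(new_text)
    return True
```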

Now I have sorted posts with permalinks that will work well with future index pages. Not bad for a morning of not getting my taxes done!

Tools: Voice Memo Manager

08 Apr 2025 — Truckee, CA

I spend a lot of time in the car driving my kids to and from school and (many, many) other activities. This means that roughly half of my driving time is occupied with conversation and kid-centric audio, but the other half is great for thinking, listening to podcasts, and turning over thoughts and ideas in my head.

The non-kid car time generally adds up to at least an hour a day. Unfortunately, many of the thoughts and ideas I have during this time are lost or forgotten because I don’t have a good system for documenting them in the moment.

Matt Webb’s recent post about using Whisper Memos to transcribe a verbal outline of a talk (recorded while on a run), then running that transcript through Claude to produce a high-level outline, got me thinking about how I could do something similar to capture my thoughts while driving around.

Process Development

Problem

I immediately ran into an issue. Much of the time my thinking in the car is happening while I’m listening to something. Often the thoughts and ideas are directly derived or inspired by content that I’m consuming in an auditory format.

This is not super compatible with Whisper Memos, which seems designed to take a long(er) form recording and transcribe it into legible paragraphs (very cool), then send it in an email. I tried recording a new memo each time I wanted to note something, but that quickly led to a bunch of short clips, each generating its own transcript and email. That seemed messy, and just not the use case Whisper Memos is designed for.

Solution

iOS already has a Voice Memos app, which conveniently syncs all of your memos to the corresponding macOS version. Bonus: I can use Siri to record a new voice memo; this seems like it might have potential!

But the macOS Voice Memos app is… not great.

It’s nice that it automatically syncs recordings across your iPhone, Apple Watch, and Mac. But as of macOS 14 (Sonoma), which I’m currently running, it still doesn’t support transcriptions. There’s no way to generate a transcript from a voice memo—let alone export one. You can’t even multi-select memos in the sidebar to drag them into another app. The actual audio files are buried somewhere in the file system.

This felt like a perfect use case for a bit of AI-assisted ā€œvibe coding.ā€

I asked ChatGPT ā€œIs there a way to access recordings and transcripts from the iOS Voice Memos app on MacOS?ā€ (it was correct that you can, but then incorrect about where the files are stored), then continued with a very long chat that eventually led to a full-on, web-based local application. It lets me access and work with voice memos synced from my mobile devices to my Mac.

The process was surprisingly fun. ChatGPT needed a lot of help along the way, but it is entirely possible to very quickly write useful tools with ChatGPT as an assistant that doesn’t mind if you ask it to do tedious things over and over again. The result was a genuinely useful (if not particularly beautiful) tool.

Voice Memos Manager

I won’t get into the specific details of the app’s functionality here, but if you’re looking for something like this you can see the code and very detailed (thanks ChatGPT) Readme on GitHub.

If you’re more curious about the process of building with ChatGPT, you can see the full chat on ChatGPT.

Does it work?

Yes! I had been listening to an interview with Paul Frazee about Bluesky and the AT Protocol, and over the few days of listening I recorded ~20 individual voice memos of my takeaways from the conversation. I was able to quickly organize, transcribe, and export those notes as one big chunk of text; then work with it to create a cleaned up summary of what I learned. I retain things much better if I write them down, and I never would have gotten it done without those transcribed voice memos.

I’ve started recording all kinds of thoughts that otherwise would have been forgotten in the chaos of my daily life. This little tool that I never would have built without ā€œvibe codingā€ makes those recordings useful to me.

ā€œOn the one hand, we have difficult things become easy; on the other hand, we have easy things become absolutely trivialā€ - indeed.

Listening: Prefetcher on Building PinkSea on the AT Protocol

07 Apr 2025 — Truckee, CA

ā€œATProto is a massive network, and at least for me, when I saw the initial graph, I was just very confused. I absolutely did not know what I was looking at. But let’s start with the base building block… the PDS.ā€

I was looking for more info about AT Protocol from an independent developer perspective, and found this Software Sessions podcast episode featuring Prefetcher, where he discusses building PinkSea on the AT Protocol.

About halfway through the episode, the conversation gets into the technical aspects of his development process, particularly around the AT Protocol’s infrastructure, including Personal Data Servers (PDS), relays, app views, the PLC directory, and DIDs. I enjoyed the whole thing, but if you want to jump straight to the AT Protocol technical info, it starts around 32 minutes in.

This is the first Software Sessions podcast I’ve listened to, and I appreciated the podcast’s structured format, which allows for easy navigation between sections, even on CarPlay. If I ever make a podcast again I’ll have to include chapter metadata. It’s very well done.

Reading List: "My Airships" by Alberto Santos-Dumont

07 Apr 2025 — Truckee, CA

Came across this great Mastodon thread about Alberto Santos-Dumont, someone I had previously never heard of. Sounds like a fascinating and brilliant individual.

I can’t find the book on Libby, but here it is on Amazon. I’m going to see if my local library can find a copy.

There’s another book, Wings of Madness: Alberto Santos-Dumont and the Invention of Flight by Paul Hoffman that’ll probably be going on my reading list after this one.

I recommend clicking through to the full thread.

Post by @[email protected]
View on Mastodon

Reading List: "Rainbows End" by Vernor Vinge

04 Apr 2025 — Truckee, CA

In the Q&A of Blaine’s ATmosphereConf talk someone made a book recommendation of Rainbows End by Vernor Vinge.

I grabbed it on Libby.

Here’s the description by the off-camera person who made the recommendation:

I have a science fiction book recommendation, and we’ve already been handing some out — Rainbows End by Vernor Vinge. Who’s read that? A few handful of people? It was written in 2006 and is set in a future world of ubiquitous computing. Every device has a secure enclave.

Apparently, we get a world government in the future — which is awesome — but it also has back doors into all the secure enclaves, which is… awkward.

A few hacker types have access to hardware in Quito and Paraguay that includes the secure enclave without the back door. Tessa has hinted, and Blaine has hinted — these are things we also need to talk about. It’s something this community should continue to think about as well. It’s part of the extended work, but it’s a lot.

Right now, in Canada, I can’t send a packet from Vancouver to Toronto — never mind the Atlantic provinces — without routing through U.S. networks.

I think we’ll just all sit with these thoughts.

I’m not sure how I want to include books here, so I’m just going to start including them. Not sure what I want to do when a book moves from ā€œto readā€ to ā€œreadingā€ to ā€œreadā€. I’ll probably just update this post? If I do that I’ll probably filter them out of the main feed.