
Weeknotes: Spring Break, Science Projects, Raspberry Pi Fun, and Trains!

21 Apr 2025 — Truckee, CA

Last week was spring break for our local school district, and Lauren and I both took breaks from work to have a family-focused week. It was a low-stress, low-pressure, great time.

We often put a lot of pressure on ourselves to “make the most” of these school/work breaks, but we’ve learned over the years that trying to cram too much (too many activities, but also too much expectation) into a week off can be… too much. This year, with a bit of initial discomfort, we went into the week with minimal plans and expectations.

Ski Day

Monday we headed out to Alpine Meadows for a family ski day. We don’t get the opportunity often to all ski together, and after shaking out some initial kid-grumpies related to our differing ability levels, we found a vibe where everyone had fun.

The big kids wound up one-skiing the whole mountain to level the playing field and introduce some hilarity.

Family Ski!

One-skiing from the summit chair: Lucas and Isaac had slightly different approaches 😂

When two skis are too easy.

The ski season is quickly winding down, so this will probably be one of the last days we all get out together. It was a good one.

Discovery Museum Reno

Tuesday we went down to the Discovery Museum in Reno with some friends. The kids happily spent hours exploring the various exhibits; all ages were engaged, including the adults.

Surprisingly, I only have one photo from the day: this impressive certificate I earned for solving a bunch of puzzles in one of the exhibits.

Mindbender Society

Science Fair Project

Brilliant move by Lucas’ school to schedule their science fair project the week after spring break… at least for kids like Lucas who will take advantage of the time to get work done.

I was able to help him build a “fluid conductivity tester” using a multimeter as an ammeter, and a USB controller providing a very low amperage (µA) circuit for testing.

Science Fair Project Time!

Testing Conductivity of Liquids

We need to come up with more projects to build (and include the other kids). It was really fun working together.

Fun With Raspberry Pi(s)

While Lucas worked away on his project, I dug through my electronics toolbox and did some inventory. Turns out I had 4 old Raspberry Pi boards (and a bunch of Arduinos) buried in there just begging to be used for something.

Raspberry Pis

I set up one of the version 1 boards with Pi-hole for DNS-powered ad blocking and Cloudflared to play with Cloudflare tunnels. Pretty fun little side project!

I’m not sure what I’ll do with the version 3 board with the screen, but it’s a good excuse to 3D print an enclosure! So that’ll be happening soon.

Trains

We wrapped the week with another museum trip down to the California Railroad Museum in Sacramento.

Sacramento Northern

The railroad museum is excellent. We visited many years ago, taking the train up from SF and spending the night in Old Sacramento, and this trip recreated that experience (a first for all but the oldest kids). We explored the museum on Friday, spent the night at a hotel just down the street, and enjoyed a ride along the river on some of their vintage carriages on Saturday morning.

Train Ride Along the River

This is an excellent little overnight excursion for families. Next time we’ll take the California Zephyr down from Truckee for even more train time. 🚞🚉🚂

A Good Week

We wrapped the week with Easter festivities (way too much candy) and a lovely, last-minute dinner celebration with friends. It’s the time of the year that always feels like we come out of hibernation, and all of our activities suddenly flip over to summer mode. I’m glad we stayed close to home to enjoy it.

'Slopsquatting' on Hallucinated Package Names

12 Apr 2025 — Truckee, CA

From the department of 🤦‍♂️

Apparently LLMs don’t just hallucinate package names (and include unnecessary real packages); they hallucinate the same non-existent package names consistently enough that bad actors could register malicious packages under the made-up names.

As noted by security firm Socket recently, the academic researchers who explored the subject last year found that re-running the same hallucination-triggering prompt ten times resulted in 43 percent of hallucinated packages being repeated every time and 39 percent never reappearing.

This isn’t a new concept. “Typosquatting” has long been an issue: registering frequently typo’d domain names for scam sites and phishing attempts.

And I guess it’s already happening in the wild…

“When we investigate, we sometimes find realistic looking READMEs, fake GitHub repos, even sketchy blogs that make the package seem authentic,” he said, adding that Socket’s security scans will catch these packages because they analyze the way the code works.

This is the icing on the cake:

“Even worse, when you Google one of these slop-squatted package names, you’ll often get an AI-generated summary from Google itself confidently praising the package, saying it’s useful, stable, well-maintained. But it’s just parroting the package’s own README, no skepticism, no context. To a developer in a rush, it gives a false sense of legitimacy.

“What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful.”

Just this morning I was using Claude to prototype an AI Agent that uses MCP, and it generated a script with these super useful sounding packages:

# MCP and agent libraries
from mcp.agent import Agent
from mcp.tools import ToolRegistry
from mcp.tools.rss import RssFeedTool
from mcp.tools.github import GitHubTool
from mcp.memory import MemoryStore
from mcp.scheduler import Scheduler
from mcp.llm import LLMClient

As an experienced developer I was skeptical that such a convenient collection of tools existed for such a new standard, so I followed up:

This looks great, can you point me to documentation for the mcp libraries?

And got the response:

I should clarify something important: the MCP (Model Context Protocol) libraries used in my example (mcp.agent, mcp.tools, etc.) are conceptual and don't currently exist as published packages in this exact form. I created a hypothetical implementation based on what an ideal MCP framework might look like.

Thanks for the clarification, Claude.
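
Incidentally, there’s a cheaper sanity check than asking the model: query PyPI’s public JSON endpoint and see whether the package exists at all. A quick sketch (stdlib only; the second package name below is made up for illustration):

# quick sanity check: does a package actually exist on PyPI?
# uses PyPI's public JSON endpoint; no dependencies required
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    try:
        urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json")
        return True
    except urllib.error.HTTPError:
        return False  # PyPI returns 404 for unknown packages

for pkg in ["requests", "mcp-agent-tools"]:  # second name is made up
    print(pkg, exists_on_pypi(pkg))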

So, how would one profit off of malicious mcp libraries? Asking for a friend.

Resurrecting launchcalendar.org

11 Apr 2025 — Truckee, CA

I’m reviving an old side project called launchcalendar.org, which I began in 2016. I started it when I realized I had no idea how many rockets were being launched weekly (way more than I thought!).

I was learning about them after the fact via images posted to Flickr, but I wanted a calendar feed to subscribe to that would let me track the launch times with links to a live stream, payload information, and launch location. Basically calendar invites to watch rockets take off.

At the time, I created a rough prototype using Jekyll + GitHub Pages. Each post was a launch, and an iCal calendar file, which could be subscribed to in Google Calendar or Apple Calendar, was generated from the posts. Launches are often delayed, and those updates needed to be reflected, so I was manually entering and updating all the data.
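
Side note on the feed itself: an iCal file is just structured text, so emitting one VEVENT per launch is straightforward. Here’s a rough Python sketch of the format (the launch entry is made up; the real site builds its feed from the Jekyll posts at build time):

# a rough sketch of emitting an iCal (.ics) feed from launch data;
# the launch entry below is made up for illustration
from datetime import datetime, timezone

launches = [
    {"slug": "example-falcon-9-starlink", "title": "Falcon 9 | Starlink",
     "start": datetime(2025, 4, 15, 21, 30, tzinfo=timezone.utc),
     "url": "https://launchcalendar.org/example-falcon-9-starlink/"},
]

def to_ics(launches):
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//launchcalendar//EN"]
    for launch in launches:
        lines += [
            "BEGIN:VEVENT",
            f"UID:{launch['slug']}@launchcalendar.org",
            f"DTSTART:{launch['start'].strftime('%Y%m%dT%H%M%SZ')}",
            f"SUMMARY:{launch['title']}",
            f"URL:{launch['url']}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)

print(to_ics(launches))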

While the system worked and would have been great for subscribers, it didn’t solve my use case: I still had to do all the work to track and enter the launches. I ended up taking on some contract work, and the project fell off my plate.

I still think it’s a good idea, and the scope is small enough to let me explore interests without requiring a whole lot of “other stuff.” Since it was built with Jekyll and hosted on GitHub Pages (no server logic, no DB), everything was still functional. I re-registered the domain, pointed the DNS at the GitHub Pages repo, and reactivated my Mapbox account because I was using Mapbox maps on the individual launch pages, which I never finished.

With that done, the project was up and running just as I left it in 2016: launchcalendar.org.

Ugly launch schedule list! Still ugly, but back up and running.

There’s a solid foundation for a working system, and with newer technologies and workflows I believe I can make this project better. My immediate plan is to clean up the website and finalize design ideas for each launch page, finishing it to the point I wanted it to reach in 2016. That still doesn’t solve the issue of data input, though.

Data Entry Plans

To get data into the system, I plan to set up a process where humans (probably me) can enter and update launch data on the website. Updates will generate pull requests, which I can review and approve. Once approved, they’ll be merged, and the data will update automatically.
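
Here’s a rough sketch of what that PR step could look like using the PyGithub library (the repo name, branch layout, and file path are placeholders, not the project’s actual setup):

# a sketch of turning a submitted launch update into a pull request
# using PyGithub; repo name, branches, and paths are hypothetical
from github import Github

def open_launch_pr(token: str, path: str, new_content: str, slug: str):
    repo = Github(token).get_repo("example/launchcalendar")  # placeholder
    base = repo.get_branch("main")
    branch = f"launch-update/{slug}"
    repo.create_git_ref(ref=f"refs/heads/{branch}", sha=base.commit.sha)

    # update the launch post on the new branch, then open a PR for review
    existing = repo.get_contents(path, ref=branch)
    repo.update_file(path, f"Update launch: {slug}", new_content,
                     existing.sha, branch=branch)
    return repo.create_pull(title=f"Launch update: {slug}",
                            body="Submitted via the website.",
                            head=branch, base="main")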

AI Agent

There are a number of sites that publish great data about launches and space news in general. I didn’t (and still don’t) want to scrape anything to programmatically pull data. But if an AI agent could search for information and tell me about it, that would solve much of the problem!

I plan to build an AI agent to search for launch data and generate draft entries and updates about launches. The agent will search for upcoming launches, live stream links, photos, updates/delays, etc., and open pull requests like a human would. Then I can review the PRs, ensure all the info is correct, make extra sure everything is attributed correctly with links to sources, and then publish the updates. I wouldn’t trust an AI agent to always be correct and automate the whole process, but if it can find relevant information and file pull requests for me to review, I think this thing could work.

Other Improvements

I’d like to add Bluesky posts in addition to the iCal calendar feed, so people can follow on Bluesky if they’d prefer. I briefly started thinking about rebuilding this whole thing on AT Protocol (I really want to build something on AT Protocol), but the Jekyll setup is so beautifully simple that I think it’s perfect for this project.

Follow Along

I’ll post as I make progress here, but you can also subscribe in your calendar app or Google Calendar if you want to see new launches as they’re added.

And of course you can follow the whole project on GitHub.

Tools: Jekyll Tools

09 Apr 2025 — Truckee, CA

My to-do list for this morning says ā€œWork on Taxes.ā€ Instead, I’m writing tools to make updates here.

A few things had been bugging me:

  • Posts were sorted by date but not time, so posts on the same day were sorted alphabetically instead of chronologically.
  • My permalinks were overly simplistic. All posts used the format /journal/title-slug. While unique titles avoided namespace issues, as I added categories and date-based index pages, I wanted the URLs to reflect some of that data.
  • Jekyll’s method of including categories in permalinks adds all categories, like /category1/category2. I would like to show only a “main” category in the URL.

I solved these issues with a combination of plugins to handle things dynamically at build time and scripts to bake some data into the posts.

Sorting

Jekyll sorts posts by date from the filename. If you have a properly formatted datetime value in your posts’ front matter, you can sort by date + time. I don’t. I use a simple time value in 12-hour format because I’m lazy and don’t want to think about date formats.

I solved this with a custom plugin to calculate an ISO 8601 datetime value at build time from the filename date and post time.

# _plugins/add_datetime_field.rb
require 'time'

# After posts are read, combine each post's filename date with the
# simple 12-hour `time` value from its front matter into a sortable
# ISO 8601 `datetime` field.
Jekyll::Hooks.register :site, :post_read do |site|
  site.posts.docs.each do |post|
    if post.data['time']
      post_date = post.date.strftime('%Y-%m-%d')
      post_time = post.data['time']

      begin
        time_obj = Time.parse(post_time)
        combined = Time.parse("#{post_date} #{time_obj.strftime('%H:%M')}")
        post.data['datetime'] = combined.iso8601
      rescue ArgumentError => e
        Jekyll.logger.warn "Datetime Plugin:", "Could not parse time '#{post_time}' in #{post.path}: #{e.message}"
      end
    end
  end
end

With the datetime value added, sorting posts by date + time is as easy as:

{% assign sorted_posts = site.posts | sort: "datetime" | reverse %}
{% for post in sorted_posts limit:10 %}
  <div class="post">
    <h1 class="post-title"><a href="{{ post.url }}">{{ post.title }}</a></h1>
    <p class="meta">{{ post.date | date_to_string }}{% if post.place %} &#8212; {{ post.place }}{% endif %}</p>
    <div class="post-content text">
      {{ post.content }}
    </div>
  </div>
{% endfor %}

For paginated pages, I used the jekyll-paginate-v2 plugin, which supports sorting.

The v2 plugin is drop-in compatible with jekyll-paginate; all I had to do was update my _config.yml from:

paginate: 10
paginate_path: /journal/page:num/

To ↓

pagination:
  enabled: true
  per_page: 10
  permalink: /page:num/
  sort_field: datetime
  sort_reverse: true

Now posts are sorted by date + time on the main index and paginated /journal pages. Yay!

Before updating permalinks, I ensured existing links would still work. The jekyll-redirect-from plugin maps old permalinks to new ones.

Redirects are performed by serving an HTML file with an HTTP-REFRESH meta tag pointing to your destination. No .htaccess file, nginx conf, xml file, or anything else is generated. It simply creates HTML files.

Not as great as a proper 301 or 302, but fine for my needs.

After installing the plugin, I added this to _config.yml to avoid generating a redirects.json that I don’t need:

redirect_from:
  json: false

Since all existing posts needed redirects, I used a Python script to write the old permalink into each post’s front matter.

See add_redirects.py and its README section.

After running ↑ that script ↑, every existing post had an entry in its front matter like:

redirect_from:
- /journal/voice-memo-manager/

This handles requests to the old URL.
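
The real script is on GitHub, but the gist is simple: parse each post’s front matter and append the old permalink. A rough sketch (not the actual add_redirects.py; it assumes standard Jekyll _posts filenames):

# a sketch of what add_redirects.py does: write each post's old
# /journal/:title permalink into its front matter (not the real script)
import re
from pathlib import Path

for post in Path("_posts").glob("*.md"):
    text = post.read_text()
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match or "redirect_from:" in match.group(1):
        continue  # no front matter, or redirect already present

    # filenames look like YYYY-MM-DD-title.md; the old permalink
    # was /journal/:title
    slug = post.stem[11:]
    redirect = f"redirect_from:\n- /journal/{slug}/\n"
    post.write_text(text.replace(
        match.group(0), f"---\n{match.group(1)}\n{redirect}---\n", 1))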

Once redirects were set up, I updated the default post permalink in _config.yml:

defaults:
  - scope:
      path: ""
      type: "posts"
    values:
      permalink: /journal/:year/:month/:slug/

Now posts have permalinks with year and month, which will be useful for future index pages. Old /journal/:title links redirect nicely. Huzzah.

Jekyll supports :categories in permalinks, but I dislike how it handles posts in multiple categories. For example, this post is in both tools and field-notes. It will appear on /tools and /field-notes category pages.

Using /:categories/:year/:month/:slug/ would result in /tools/field-notes/2025/04/jekyll-tools. I dislike this because /tools/field-notes will never be a valid category URL.

I prefer setting a link_category in the post front matter to specify the primary category for the permalink.

categories:
  - tools
  - field-notes
link_category: tools

So the post permalink becomes /tools/2025/04/jekyll-tools.

Jekyll doesn’t support custom front matter variables in permalinks, so I created a plugin to set a permalink for posts with link_category or categories.

  • If link_category is set, it is used as the primary category in the permalink.
  • If link_category is not set but the post has categories, the first category is used.

# _plugins/add_custom_permalink.rb

Jekyll::Hooks.register :site, :post_read do |site|
  site.posts.docs.each do |post|
    # Determine the link_category
    link_category = post.data['link_category']
    if !link_category && post.data['categories'] && post.data['categories'].any?
      link_category = post.data['categories'].first
    end

    # Skip this post if no link_category or categories are available
    next unless link_category

    # Extract year and month from the post date
    year = post.date.strftime('%Y')
    month = post.date.strftime('%m')
    slug = post.data['slug'] || post.data['title'].downcase.strip.gsub(" ", "-").gsub(/[^\w-]/, "")

    # Generate the custom permalink
    custom_permalink = "/#{link_category}/#{year}/#{month}/#{slug}/"

    # Set the custom permalink
    post.data['permalink'] = custom_permalink
  end
end

Now posts with categories have permalinks including one primary category instead of the default /journal. Hooray!

I also added a script to set link_category for all posts with existing categories. See set_link_category.py and its README section.

This script scans posts and sets the first category as link_category in their front matter. While the plugin fallback handles this, the script ensures permalinks won’t break if I add categories to old posts.
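
A rough sketch of that script’s logic, using the python-frontmatter package (an assumption on my part; the real set_link_category.py may differ):

# a sketch of setting link_category from the first category in each
# post's front matter; assumes the python-frontmatter package
from pathlib import Path

import frontmatter

for path in Path("_posts").glob("*.md"):
    post = frontmatter.load(str(path))
    categories = post.get("categories")
    if categories and "link_category" not in post.keys():
        post["link_category"] = categories[0]  # first category wins
        path.write_text(frontmatter.dumps(post))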

Now I have sorted posts with permalinks that will work well with future index pages. Not bad for a morning of not getting my taxes done!

Tools: Voice Memo Manager

08 Apr 2025 — Truckee, CA

I spend a lot of time in the car driving my kids to and from school and (many, many) other activities. This means that roughly half of my driving time is occupied with conversation and kid-centric audio, but the other half is great for thinking, listening to podcasts, and turning over thoughts and ideas in my head.

The non-kid car time generally adds up to at least an hour a day. Unfortunately, many of the thoughts and ideas I have during this time are lost or forgotten because I don’t have a good system for documenting them in the moment.

Matt Webb’s recent post about using Whisper Memos to transcribe the verbal outline of a talk (recorded while on a run), then running that transcript through Claude to produce a high-level outline, got me thinking about how I could do something similar to capture my thoughts while driving around.

Process Development

Problem

I immediately ran into an issue. Much of my in-car thinking happens while I’m listening to something; often the thoughts and ideas are directly derived from or inspired by content I’m consuming in an auditory format.

This is not super compatible with Whisper Memos, which seems designed to take a long(er) form recording and transcribe it into legible paragraphs (very cool), then send it in an email. I tried recording a new memo each time I wanted to note something, but that quickly led to a bunch of short clips, each generating a transcript and its own email. This seems messy, and just not the use case Whisper Memos is designed for.

Solution

iOS already has a Voice Memos app, which conveniently syncs all of your recordings to the corresponding macOS version. Bonus: I can use Siri to record a new voice memo. This seems like it might have potential!

But the macOS Voice Memos app is… not great.

It’s nice that it automatically syncs recordings across your iPhone, Apple Watch, and Mac. But as of macOS 14 (Sonoma), which I’m currently running, it still doesn’t support transcriptions. There’s no way to generate a transcript from a voice memo—let alone export one. You can’t even multi-select memos in the sidebar to drag them into another app. The actual audio files are buried somewhere in the file system.

This felt like a perfect use case for a bit of AI-assisted “vibe coding.”

I asked ChatGPT “Is there a way to access recordings and transcripts from the iOS Voice Memos app on macOS?” (it was correct that you can, but then incorrect about where the files are stored), then continued with a very long chat that eventually led to a full-on, web-based local application. It lets me access and work with voice memos synced from my mobile devices to my Mac.

The process was surprisingly fun. ChatGPT needed a lot of help along the way, but it is entirely possible to very quickly write useful tools with ChatGPT as an assistant that doesn’t mind if you ask it to do tedious things over and over again. The result was a genuinely useful (if not particularly beautiful) tool.
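
The app’s internals aren’t the point here, but for flavor, local transcription with the open-source openai-whisper package looks roughly like this (illustrative only, not the tool’s actual code; the memo filename is made up):

# a rough sketch of local transcription with the open-source
# openai-whisper package; the memo filename is made up
import whisper

model = whisper.load_model("base")     # small, CPU-friendly model
result = model.transcribe("memo.m4a")  # a synced voice memo file
print(result["text"])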

Voice Memos Manager

I won’t get into the specific details of the app’s functionality here, but if you’re looking for something like this you can see the code and very detailed (thanks, ChatGPT) README on GitHub.

If you’re more curious about the process of building with ChatGPT, you can see the full chat on ChatGPT.

Does it work?

Yes! I had been listening to an interview with Paul Frazee about Bluesky and the AT Protocol, and over a few days of listening I recorded ~20 individual voice memos of my takeaways from the conversation. I was able to quickly organize, transcribe, and export those notes as one big chunk of text, then work with it to create a cleaned-up summary of what I learned. I retain things much better when I write them down, and I never would have gotten it done without those transcribed voice memos.

I’ve started recording all kinds of thoughts that otherwise would have been forgotten in the chaos of my daily life. This little tool that I never would have built without “vibe coding” makes those recordings useful to me.

“On the one hand, we have difficult things become easy; on the other hand, we have easy things become absolutely trivial” - indeed.

Listening: Prefetcher on Building PinkSea on the AT Protocol

07 Apr 2025 — Truckee, CA

“ATProto is a massive network, and at least for me, when I saw the initial graph, I was just very confused. I absolutely did not know what I was looking at. But let’s start with the base building block… the PDS.”

I was looking for more info about AT Protocol from an independent developer perspective, and found this Software Sessions podcast episode featuring Prefetcher, where he discusses building PinkSea on the AT Protocol.

Around halfway through the episode, the conversation gets into the technical aspects of his development process, particularly the AT Protocol’s infrastructure: Personal Data Servers (PDS), relays, app views, the PLC directory, and DIDs. I enjoyed the whole thing, but if you want to jump straight to the AT Protocol technical info, it starts around 32 minutes in.

This is the first Software Sessions podcast I’ve listened to, and I appreciated the podcast’s structured format, which allows for easy navigation between sections, even on CarPlay. If I ever make a podcast again I’ll have to include chapter metadata. It’s very well done.

Reading List: "My Airships" by Alberto Santos-Dumont

07 Apr 2025 — Truckee, CA

Came across this great Mastodon thread about Alberto Santos-Dumont, someone I had never heard of before. He sounds like a fascinating and brilliant individual.

I can’t find the book on Libby, but here it is on Amazon. I’m going to see if my local library can find a copy.

There’s another book, Wings of Madness: Alberto Santos-Dumont and the Invention of Flight by Paul Hoffman that’ll probably be going on my reading list after this one.

I recommend clicking through to the full thread.


Reading List: "Rainbows End" by Vernor Vinge

04 Apr 2025 — Truckee, CA

In the Q&A of Blaine’s ATmosphereConf talk, someone recommended Rainbows End by Vernor Vinge.

I grabbed it on Libby.

Here’s the description by the off-camera person who made the recommendation:

I have a science fiction book recommendation, and we’ve already been handing some out — Rainbows End by Vernor Vinge. Who’s read that? A few handful of people? It was written in 2006 and is set in a future world of ubiquitous computing. Every device has a secure enclave.

Apparently, we get a world government in the future — which is awesome — but it also has back doors into all the secure enclaves, which is… awkward.

A few hacker types have access to hardware in Quito and Paraguay that includes the secure enclave without the back door. Tessa has hinted, and Blaine has hinted — these are things we also need to talk about. It’s something this community should continue to think about as well. It’s part of the extended work, but it’s a lot.

Right now, in Canada, I can’t send a packet from Vancouver to Toronto — never mind the Atlantic provinces — without routing through U.S. networks.

I think we’ll just all sit with these thoughts.

I’m not sure how I want to include books here, so I’m just going to start including them. Not sure what I want to do when a book moves from “to read” to “reading” to “read”. I’ll probably just update this post? If I do that I’ll probably filter them out of the main feed.

Listening: Paul Frazee on Bluesky and the AT Protocol

04 Apr 2025 — Truckee, CA

This Software Engineering Radio episode is one of the best overviews I’ve come across so far for how the AT Protocol (Authenticated Transfer Protocol) that Bluesky is built on actually works. It’s from January 2025 and features an in-depth conversation with Paul Frazee (@pfrazee.com), CTO of Bluesky.

I’d recommend it to anyone curious about decentralized social networks, especially if you’re wondering how Bluesky differs from protocols like ActivityPub (used by Mastodon).

One thing I really appreciate about this podcast is the format — it’s structured and well-moderated, with thoughtful, prepared questions that keep the conversation on track. It’s much more focused and clear than the meandering tech-talk formats out there.

Here are the notes I collected while listening to the podcast. I do this mainly to educate myself, so if I got anything wrong, yell at me on Bluesky.

Bluesky/AT Protocol Origin

Paul describes the origin of Bluesky as a Twitter-funded project to explore alternative architectures for social media. He describes three main categories of decentralized networking tech at the time:

  • Peer-to-peer (think BitTorrent or Secure Scuttlebutt),
  • Federation (Mastodon and ActivityPub), and
  • Blockchain-based systems.

Although blockchain is mentioned, it’s not a big part of the conversation. Paul’s experience comes primarily from the peer-to-peer world, including nearly a decade working on Secure Scuttlebutt. He gives a summary of what worked and what didn’t in that space — namely, the limitations around device syncing, key management, and especially scale. Nice quote: “It can’t be rocket science to do a comment section.”

What Is the AT Protocol?

ā€œATā€ stands for Authenticated Transfer. It’s built around a few core ideas:

  • DIDs (Decentralized Identifiers), based on a W3C spec — these allow users to have portable identities not tied to any single server.
  • PDS (Personal Data Servers), where each user’s data lives.
  • A relay-and-aggregation system that pulls in updates from across the network to power app-level features like timelines, threads, and search.

This setup enables a decentralized (although still server-based) yet scalable architecture.

Frazee draws a comparison between ATProto and traditional web infrastructure: think of PDSs as websites, relays as search engine crawlers, and app views as search interfaces or timelines. They get into the architecture discussion around 11 minutes in.

Portability and DIDs

One of the big differentiators is account portability using DIDs (decentralized identifiers). DIDs provide stable, cryptographic identifiers — and they’re key to enabling server migration without breaking your social graph.

Paul explains this well around 31 minutes in: in Mastodon, if you move to a new server, your identity and history are fragmented. In ATProto, your DID doesn’t change — it just points to a new server. This eliminates the cascading breakage that happens with federated identifiers.

Domains, Handles, and Identity

At 36 minutes, Frazee talks about how handles work in ATProto. Your handle can be your own domain name, which adds an element of identity ownership. A fun example: Senator Ron Wyden uses @wyden.senate.gov as his handle — no blue check needed. It’s a trust signal in DNS itself.

Custom Feeds and Community Innovation

Around 39–41 minutes, Paul describes how community members have been building tools to create custom feeds. The official Bluesky app doesn’t offer a built-in feed editor yet, but others have already made composable UIs for combining hashtags, user lists, and post types into curated feeds. He mentions the “Quiet Posters” feed, which surfaces posts from people who don’t post often — a simple but clever way to surface quieter voices.

Moderation: Labels

Another significant topic is moderation — both content moderation and safety/legal compliance. Around 42 minutes, Paul explains how labels serve as a metadata layer that anyone (including independent moderation services) can publish and subscribe to. This allows client apps to let users choose their own filters — a more flexible model than top-down moderation?

Building on ATProto

Towards the end of the episode (around 46 minutes), Paul lists a few projects already using the protocol.

He also describes how building apps works in practice. You authenticate users via OAuth, then write to their PDS and listen to the relay’s event stream to update your UI. It’s a little different from a traditional app stack but not dramatically so — and in some ways, it simplifies the developer experience (says Paul, I haven’t built anything on ATProto yet).
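
As a sketch of the write side (using the community atproto Python SDK, with an app-password login rather than the OAuth flow Paul describes; the handle and password are placeholders):

# a minimal sketch of writing to a user's PDS with the community
# `atproto` Python SDK; handle and app password are placeholders
from atproto import Client

client = Client()
client.login("alice.example.com", "app-password-here")

# posting creates a record in the user's repo on their PDS; relays
# pick it up from the event stream and app views index it
client.send_post(text="Hello from a script!")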

Scale and Open Source

Paul mentions that Bluesky has scaled to over 11 million users (at that time, I believe they’re at 30M+ now) and 1.5–2 million daily actives — with no major architecture bottlenecks (1:05). That’s huge, especially for a new + decentralized protocol, but it sounds like they also have massive infrastructure and funding. I’m curious how much bootstrapping an app on ATProto would realistically cost, especially if it took off. I’m working my way through the talks from the recent ATmosphereConf now, and I hope there’s more insight from independent developers in there.

I was also surprised to learn that all the source code for Bluesky is open source and available on GitHub (1:07).

Final Thoughts

Paul is a clear communicator. He has extensive experience, and I’m impressed by his ability to deliver an overview of both the AT Protocol and Bluesky in just over an hour.

After listening, I am significantly more interested in learning about the AT Protocol. It’s great to feel excited about a protocol again; it feels similar to the bygone days when REST APIs and RSS were novel new things to play with.

One idea that I might throw some “vibe coding” at: an RSS reader backed by ATProto, where you can follow what your Bluesky contacts are reading or listening to, as well as aggregate links shared by friends (and find new feeds) — kind of like the Breaker podcast app (RIP). Absolutely not something the world is crying out for, but it would be fun.

Things I Think are Great: "Context Window" by Matt Webb

28 Mar 2025 — Alpine Meadows, CA

Difficult things becoming easy is not the story here, because on the one hand, we have difficult things become easy; on the other hand, we have easy things become absolutely trivial — and that, for me, is the interesting part.

I’m a few weeks late sharing this. I’ve been sitting on it and spending too much time thinking about how I want to share links here. I’m still thinking about it…

I really enjoyed this talk, particularly the positive and fun perspective on LLMs and AI. I understand much of the negativity around AI, especially related to funding, business models, etc. — but there are also so many opportunities to build fun and useful things. I think Matt does a great job of shining a light on the positive side of the current AI tools.