A collection of things.
By Chris James Martin

SF2000 Updates and Improvements

13 Feb 2025 — Truckee, CA

Today was a snow day, which means my kids got much more video game time than usual, and in our house video game time means “retro” games powered by the amazing ~$20 SF2000 (Super Mario Brothers 3 is still the pinnacle of game design, and I will die on this hill).

The SF2000 is a marvel, and being able to hook it up to the TV and play multiplayer NES and SNES games with my kids makes me so happy. But the screen tearing and other miscellaneous issues that it comes with out of the box motivated me to do some research and updating today.

I’m sharing what I did here, because there is a lot of information to be found about the SF2000, enough that it can be overwhelming if you’re just looking to make it better without watching a bunch of youtube videos and reading way more than you need to get the job done. I’ve now done both of those for you, and can share the basics.

Goals:

  • ✅ Update/fix the bootloader.
  • ✅ Install Multicore firmware, as it has the fixes and improvements we’re looking for and it’s based on the official firmware.
  • ✅ Fix screen tearing (seems that multicore has software fixes).
  • Change the default games for each system: make all the good Marios the defaults, maybe Sonic and Zelda, and try to remember the best games for each system. I didn’t get this one done, but multicore has slightly better defaults than the stock firmware, so it’s not urgent anymore.

This guy has written many, many more words about the SF2000 than I ever will. If you want to get deep, go check it out: https://vonmillhausen.github.io/sf2000/

Update the Bootloader

Apparently it is critical to do this before doing anything else, or you risk bricking the device, so do it. More information here.

The process is quick and easy if you’re starting from a working SF2000. Here are the steps, copied from here:

  1. Ensure your SF2000 is in a state where it boots normally when turned on (displays a boot logo, proceeds to the stock firmware main menu)
  2. Ensure your SF2000’s battery is fully charged (having the device power off during the patching process will likely “brick” it, rendering it inoperable)
  3. Power off the SF2000, and remove the microSD card
  4. Connect the microSD card to your computer
  5. Download this zip file: SF2000_bootloader_bugfix.zip
  6. Extract the zip file; inside is a folder called UpdateFirmware, containing a single file called Firmware.upk
  7. Copy the UpdateFirmware folder to the root of the microSD card, so that the UpdateFirmware folder is in the same place as the bios and roms folders (i.e., you’ll have an sd:/UpdateFirmware/Firmware.upk file)
  8. Eject the microSD card from your computer, and put it back in the SF2000
  9. Turn the SF2000 on; you should see a message in the lower-left corner of the screen indicating that patching is taking place. The process will only last a few seconds. If you do not see this message, and instead just go to the main menu as normal, then either this means your SF2000 has previously had the fix applied already, or you should double-check you’ve placed the patch file in the right place
  10. When the patching is complete, you will be taken to the main menu as usual
  11. Power off the SF2000, and remove the microSD card
  12. Connect the microSD card to your computer
  13. Delete the UpdateFirmware folder (it’s no longer needed)

Install Multicore Firmware

It’s tough to track down the best sources of information about this. Apparently the real activity happens on Discord and Telegram, but I’m not committed enough to go there. There is some good information here, and it looks like the most “official” builds live here, but this fairly recent youtube guide/review points to a more recent “Purple Neo” build, which I’m going to use.

Firmware installation steps:

  1. Make sure you’ve fixed the bootloader, above.
  2. Download the 10GB zip file, found here.
  3. Either use a new FAT32-formatted microSD card, or back up and format the stock microSD card as FAT32.
  4. Insert the microSD card and determine its name. Mine is called ‘NO NAME’.
     % ls /Volumes
     Macintosh HD	NO NAME
    
  5. (Optional) Create a zip of the original contents; I’ll store it on the desktop for now.
     % zip -r ~/Desktop/SF2000_backup.zip /Volumes/NO\ NAME/
    
  6. Use Disk Utility to erase (format) the microSD card, and choose MS-DOS (FAT) as the format.
  7. Extract the contents of the 10GB zip you downloaded earlier to the microSD card.
     % unzip ~/Downloads/PurpleNeo_Multicore_0.10_23365b6_2024-07-11_b.zip -d /Volumes/NO\ NAME/
    
  8. Do the screen tearing fix below.
  9. Eject the microSD card, put it back in the SF2000, and boot.

Improve Screen Tearing

Some games that we really enjoy, like Super Mario Brothers, have pretty bad screen tearing with the default firmware. Once multicore is installed, we can set a config value to enable some software improvements.

This fix is a one-line config change, but if you want to see more, here’s a youtube video about it.

  1. Install multicore following the steps above.
  2. With the microSD card still inserted in your computer, open the file cores/config/multicore.opt.
     nano /Volumes/NO\ NAME/cores/config/multicore.opt
    
  3. Set the value sf2000_tearing_fix on line 9 to fast and save the file (^X if using nano); see the example line after this list.
  4. Eject the microSD card, put it back in the SF2000, and boot.
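
For reference, multicore.opt uses the usual key = "value" core-options format, so the edited line should end up looking something like this (the exact line number can vary between builds):

sf2000_tearing_fix = "fast"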

That’s it, enjoy your improved SF2000!

I Skipped the Super Bowl, but This Ad Was Made for Me

11 Feb 2025 — Truckee, CA

I didn’t watch the Super Bowl this year. I had absolutely no connection to it and no preference for either team. While I generally enjoy football (college football, specifically) and spending time with friends—and yes, the commercials and halftime show are usually worth watching, if only for the water cooler conversations—I was elsewhere.

Instead, I was doing dad things: attending Ski Team with my kids, helping with homework, making dinner, and assisting my wife in unloading a mountain of Costco supplies she’d picked up earlier that day. (I bet Super Bowl Sunday is a fantastic day to shop at Costco.)

That’s just how it is in our family—we’re more of an Olympics-watching group, and that’s perfectly fine.

So, I didn’t watch the game, but apparently I was destined to see one of the ads…

This morning, in my groggy pre-coffee state, I was reading one of my favorite newsletters, Garbage Day, and chuckled at their critique of a Super Bowl ad:

Google’s Super Bowl ad last night, “Dream Job,” depicted a dad getting ready for a job interview by talking out loud in his kitchen to an AI voice assistant, something I am very confident no one has done ever. But that doesn’t matter because Silicon Valley believes they are big enough now to create the future, rather than scale up to meet it.

I love Garbage Day—it makes me feel online and cool, with that warm fuzzy smugness we all occasionally need.

Then, not five minutes later, my dad texted me a link to the same commercial:

In case you didn’t see this… ❤️

I watched it, and damn it, my eyes started leaking. It’s beautiful.

I feel seen.

Will I ever walk around my kitchen talking to an AI assistant? I can’t say (but yeah, almost certainly). While I doubt it’ll be Google’s service, I’m firmly in the camp that believes GPT and LLMs are legitimately transformative technologies, with more useful tools being built on top of them every day.

I left my “professional” job to devote more time to my family. Currently, I’m exploring AI tools to improve my journaling, workflows, and time management. And yes, I’m about to jump back into searching for paid work that fits with a more balanced life. I am absolutely the target audience for this ad, and it hit the bullseye.

Congratulations to the Google creative team and everyone involved in making this. Even in our hyper-critical online world, the reception seems almost universally positive (even the youtube comments!). And while Garbage Day may have rolled their eyes at it, I thought it was beautiful. I guess I can’t be smug all the time.

The Countdown to Rivian "Adventure Vans" Has Begun

10 Feb 2025 — Truckee, CA

As of today, Rivian is opening up sales of its commercial van (the Amazon one) to anyone (with a business), with orders as low as a single van.

I’m excited to see businesses start picking these up. I don’t expect to see many where I live in Truckee right away, but as I think/type this I realize that maybe I’ll be proven wrong, since our mountain-town gas prices make EVs a more compelling option. Too bad there isn’t a 4wd version… Yet?

EV “Adventure Vans”?

I’m even more excited about what the “adventure van” market will do with these. Do EV camper/adventure vans make sense? I’m not sure. I think so for a large chunk of what people actually use their “adventure” vans for, but probably not for what people think they’ll use them for.

My use would be 99% in trailheads/ski parking areas within 50 mi of my house, so electric would be awesome. Overlanding to Alaska, maybe not so much.

As for the van, it’s pretty boring, if cute in that quirky Rivian way. Can’t wait to see what some customization shops come up with, even if they’re just built to show off concepts at van shows. I’ll post what I see.

Yay Cameras Update: Two Weeks In

04 Feb 2025 — Alpine Meadows, CA

It’s been two-ish weeks since I started my “Yay Cameras” project. It’s been getting a couple of hours of my time most days, and while the site might still look a bit rough, I’m happy with the progress I’ve made. If you haven’t already, you can check out my initial plans for the site here.

Backend Focus

Over the past couple weeks, I’ve been following my preferred path and spending quite a bit of time playing around in backend/ops land. This has involved a lot of learning and experimentation with various technologies:

  • Next.js: Yes, the frontend, but… focusing on server-side components to ensure pages are cacheable. I’m excited about the server-side features; I’m aiming to build a completely cached site that can be enhanced later with frontend interactivity. And since I’ve been working almost exclusively on a 99% client-side React app for the last 5 years, I want to stay in React for that stuff, for now.
  • Serverless Stack (SST) + OpenNext: SST for deploying the Next.js app on AWS using OpenNext. SST seems good, but I decided to manage other services with…
  • Terraform: To deploy DynamoDB and S3, because it’s so easy and the standard for managing this stuff.

While the frontend might still be a work in progress, the backend is up and running smoothly. I’ve been exploring the pros and cons of these various tools over the last few months (recovering from Amplify), and I’m generally happy with this setup.

Just Ship It

One of the key goals of this project is learning to ship imperfect things. It’s easy to get stuck in perfectionism, but putting something imperfect out there and then iterating on it is something I’ve been pushing myself to do. Learning in public (even though I don’t really think anyone is looking). I’m also trying to develop a habit of writing something every week, and this gives me something to talk about.

Current Features

Here’s a rundown of what I have so far:

  • Backend: A script that I run manually once a day to fetch new cameras and images from Flickr, along with product information from Amazon. All this data is stored in DynamoDB/S3.
  • Frontend: A Next.js app with an index page that picks a random manufacturer and displays a list of cameras we’ve found. Each camera has its own page displaying an image (if available) and photos taken with that camera.

All these pages are Next.js server components, so they’re cacheable and are just served directly from S3 (I think, I haven’t dug into exactly how OpenNext works). The index page is already cached, and I’ll cache the individual camera pages too, eventually.
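
For the curious, here’s the rough shape of one of those pages. This isn’t the actual Yay Cameras code: the route path and the getCamera() helper standing in for the DynamoDB read are made up for illustration, and the exact params typing varies a bit between Next.js versions.

// app/cameras/[id]/page.tsx (hypothetical path): a minimal server-component sketch
import { notFound } from "next/navigation";

type Camera = { id: string; make: string; model: string; imageUrl?: string };

// Hypothetical stand-in for the DynamoDB read
async function getCamera(id: string): Promise<Camera | null> {
  return { id, make: "Sony", model: "ILCE-7" };
}

// An async server component: it renders entirely on the server, ships no client JS,
// and the resulting HTML can be cached and served straight from the CDN/S3.
export default async function CameraPage({ params }: { params: { id: string } }) {
  const camera = await getCamera(params.id);
  if (!camera) notFound();

  return (
    <main>
      <h1>
        {camera.make} {camera.model}
      </h1>
      {camera.imageUrl && <img src={camera.imageUrl} alt={`${camera.make} ${camera.model}`} />}
    </main>
  );
}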

Camera Pages + Flickr Photo Embeds

Flickr photos are marked up to display Flickr photo embeds. Each image includes metadata such as the photographer, license, title, etc., and links back to Flickr even if the embed doesn’t load. I’m particularly fond of these, since I wrote the embed functionality for Flickr and I really like how I did it: a progressively enhanced img tag -> sourceless iframe. Flickr went down at one point while I was working and the embeds kept chugging 🥰
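
Since I’m bragging about it anyway, here’s a heavily simplified sketch of the general pattern, from memory and definitely not the production embed code: the server renders a plain linked img, and a little script upgrades it to an iframe with no src, so a broken embed can never take the photo (or the link back to Flickr) down with it.

// Simplified illustration of the img -> sourceless-iframe upgrade, not Flickr's real code.
// Server-rendered markup is just a working link + image:
//   <a class="flickr-embed" href="https://www.flickr.com/photos/..."><img src="..." alt="..."></a>
document.querySelectorAll<HTMLAnchorElement>("a.flickr-embed").forEach((link) => {
  const img = link.querySelector("img");
  if (!img) return;

  // An iframe with no src never makes its own request, and its document is
  // same-origin, so we can write the richer embed into it directly.
  const frame = document.createElement("iframe");
  frame.width = String(img.width || 640);
  frame.height = String(img.height || 480);
  frame.setAttribute("frameborder", "0");
  link.after(frame);

  const doc = frame.contentDocument;
  if (!doc) {
    frame.remove(); // couldn't get a document; keep the plain image and bail
    return;
  }

  doc.open();
  doc.write(`<a href="${link.href}"><img src="${img.src}" alt="${img.alt}" style="max-width:100%"></a>`);
  doc.close();
  link.remove(); // only remove the plain image once the upgrade succeeded
});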

Here’s an example page that showcases a nice collection of photos taken with the Sony ILCE-7: Yay Cameras - Sony ILCE-7.

What’s Next?

There’s still plenty of work to be done, especially on the design/features front. I’m taking a bit of a break for now to dig into another project, but I’ll be working on adding more camera and manufacturer information to fill in some empty spaces when I get back into it.

Overall, this project is more an exercise in getting stuff out and learning along the way. I’m not exactly embarrassed by what’s there, but I want to put stuff I’m not 100% comfortable with out there rather than just keeping it in a git repo to die. More to come, and I’m sure I’ll be embarrassed by a lot of it.

Building Something: Yay Cameras!

15 Jan 2025 — Olympic Valley, CA

I’m working on a new personal project based on a very old idea. It’s called Yay Cameras!, and it will be a site about… Cameras!

I’ve always loved cameras. I’ve never been a particularly excellent photographer, but I’ve always been interested in technology, and cameras were the coolest technology available to normal people like me, even before computers and video games; cameras were (are) magic.

My first digital camera was a Toshiba PDR-2. It was a crazy little thing with a PCMCIA card interface that flipped out of the back to interface with a PC, and I’m shocked that I (read: my mom) even had a computer with a PCMCIA slot to plug it into. I remember taking that camera with me on my first international trip when I was ~15, but I have no idea what happened to any photos that I took. The internet wasn’t quite ready for them at the time.

Over the subsequent years I’ve had dozens of cameras. Before our phones took over, I would wander the camera section of the electronics store just to see what was new. The idea for Yay Cameras! is a website where cameras get “profile pages” with sample photos and videos, links to photographers who use them, suggested accessories, etc. I imagine it as a place where people interested in photography and cameras can go to see what’s new, research a new camera, or connect with others for tips and advice.

Will there be interest in this? I’m not sure, but that’s not really the point. I want to build it for my own interest.

“This is my Cam!”

This project has roots in a hack day app I built back in 2012 called This is my Cam!. The app let Flickr users generate profile pages for their cameras using their uploaded photos. It was fun, simple, and surprisingly popular.

Here are some screenshots of This is my Cam!:

This is my Cam!

Unfortunately, it was a victim of its own success. The app was built with Django/Python and ran on a tiny EC2 micro instance. It needed offline jobs to fetch and process Flickr photos, and the server couldn’t handle the load when ~1000 people signed up in the first few days. I wasn’t ready to scale it up (or pay for it), so it fizzled out. I’ve used the dream of rebuilding it to explore various technologies over the years, but I’ve never gotten it back out there for public consumption.

Why build this?

I’m not looking to build a “big thing” at the moment, but I want to put something out publicly that will give me a place to play with new tech. I’m constantly building little projects to scratch itches, but they rarely go further than satisfying my curiosity about the technology, and never far enough to justify a “product” or even a blog post. This is my Cam! is too big; I’m not really interested in having auth, user management, permissions, and everything that comes with an app with users. For whatever reason I do really want Yay Cameras! (and This is my Cam!) to exist (I’ve held on to the domains for over a decade…), so I’m going to build something and see if it’s worth maintaining.

“MVP”

  1. Core Features:
    • A script to discover and catalog cameras by analyzing photos from Flickr explore. (daily, manual execution; rough sketch after this list)
    • A database of cameras with key details and example photos.
  2. Visual Design:
    • Borrowing from the playful style of This is my Cam! circa 2012.
  3. Tech Stack:
    • Built with Next.js, hosted serverlessly on AWS using SST and DynamoDB.
  4. This Week’s Goal:
    • Get a daily process running to update the camera database.
    • Deploy a basic frontend to yaycameras.com to browse the collection.
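
To make the camera-discovery script a bit more concrete, here’s roughly its shape. This is a sketch, not the real code: it assumes a FLICKR_API_KEY in the environment, uses the public flickr.interestingness.getList and flickr.photos.getExif endpoints, and the catalogCamera() write to DynamoDB/S3 is left as a hypothetical stub.

// Rough sketch of the daily discovery pass, not the actual Yay Cameras script.
const API = "https://api.flickr.com/services/rest/";
const KEY = process.env.FLICKR_API_KEY!; // assumes a Flickr API key is configured

async function flickr(method: string, params: Record<string, string>) {
  const qs = new URLSearchParams({ method, api_key: KEY, format: "json", nojsoncallback: "1", ...params });
  const res = await fetch(`${API}?${qs}`);
  return res.json();
}

async function run() {
  // 1. Pull a page of Flickr explore ("interestingness") photos
  const explore = await flickr("flickr.interestingness.getList", { per_page: "50" });

  for (const photo of explore.photos.photo) {
    // 2. Read each photo's EXIF and look for the camera Make/Model tags
    const exif = await flickr("flickr.photos.getExif", { photo_id: photo.id });
    const tags: Array<{ tag: string; raw: { _content: string } }> = exif.photo?.exif ?? [];
    const make = tags.find((t) => t.tag === "Make")?.raw._content;
    const model = tags.find((t) => t.tag === "Model")?.raw._content;
    if (!make || !model) continue;

    // 3. Hypothetical: upsert the camera and an example photo into DynamoDB/S3
    console.log(`Found ${make} ${model} in photo ${photo.id}`);
    // await catalogCamera({ make, model, photoId: photo.id });
  }
}

run();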

What’s Next

I expect the backend to come together quickly; I’ve done most of the groundwork in the little projects I mentioned. I imagine the frontend is where I’m most likely to fall into rabbit holes, with many new things to explore and learn. I’ll probably make it extremely simple and ugly in this pass.

This isn’t a startup or a grand vision. It’s just a site I want to build because cameras are cool, and I think others might think so too. If you’ve got ideas or suggestions, I’d love to hear them.

Useful LLM AI: First Day With an AI Assistant

10 Jan 2025 — Olympic Valley, CA

What am I talking about?

Watch this video

Earlier this week, I checked a months-overdue item off of my to-do list and set up a very basic “AI Personal Assistant” with Dan Catt’s Kitty AI. In its current form, it simply uses ChatGPT 4 to generate a set of three morning questions, then saves my answers and feeds those back into the prompt on subsequent days. This allows each day’s new questions to have some context of my previous answers, and therefore my emotional state, what I’m working on and hoping to achieve, if I’ve been setting aside time for exercise and rest, etc.
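
To make that loop concrete, here’s my rough mental model of it, not Dan’s actual code (that lives in his repo): the previous answers are loaded from disk and folded into the prompt, and the model is asked for three fresh questions. The file path and loadPreviousAnswers() helper are made up for illustration, and it assumes an OPENAI_API_KEY in the environment.

// Sketch of the daily question loop as I understand it (not Kitty's real implementation).
import { readFile } from "node:fs/promises";

// Hypothetical: previous days' questions and answers saved as plain text
async function loadPreviousAnswers(): Promise<string> {
  return readFile("journal/answers.txt", "utf8").catch(() => "");
}

async function morningQuestions(): Promise<string> {
  const history = await loadPreviousAnswers();
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You are a thoughtful morning journaling assistant." },
        { role: "user", content: `Here are my previous answers:\n${history}\n\nAsk me three new morning questions that build on them.` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // today's three questions
}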

So far, I think it is shockingly great, and I am incredibly excited to use this tool to track what I’m doing, give myself accountability, and develop a routine. Obviously, a script backed by an LLM isn’t going to magically make those things happen, but I think it just might be the right thing to help me make those things happen. It’s already made a huge impact on my productivity and what I’ve chosen to do with my time over the last two days (I know, two days… but I’m optimistic!).

Of course, this tool is in the context of what I’m trying to improve personally: mindfulness, organization, goal-setting, and work/personal life separation.

First Day Experience

On my first day using Kitty, it asked me a question about my emotional state, which I answered with something like:

“I’m feeling anxious that I won’t accomplish everything I hope to today, and rushed because I need to get treats to Lucas’ school for his birthday.”

I was wrapping up getting Kitty running, really wanted to get it done that day, and stressed that I wouldn’t. Another question was about what type of short break or activity I could include in my day to recharge, to which I said:

“It would be nice to get a few ski runs in but I don’t know if I’ll have the time.”

As I rushed to Lucas’ school to deliver his birthday treats, I turned these answers over in my head. I had initially planned to drop treats off at the school, then head back to a cafe and spend a few more hours on the computer before returning to pick Lucas and Isaac up from school. However, I realized that I would really get more value out of focusing on my son’s birthday and celebrating that, than whatever I might accomplish with two more hours staring at the computer.

Instead of rushing in and out for a quick birthday celebration, I signed Lucas and Isaac out for the remainder of the day (a totally normal thing to do; half the school leaves at noon for ski teams), and spent the afternoon skiing with them. I got the activity I needed and made my sons’ birthday week that much more special.

Would I have done the same without the questions, responses, and post-thought? I don’t know, but I really like the feeling of stopping for a few minutes each morning and being thoughtful, not just rushing into my to-do list.

Getting Started with Kitty

08 Jan 2025 — Incline Village, NV

Hello Kitty!

I’m cheating a bit and writing/posting this a day late, but Kitty is set up! The process was super easy, although I now understand that what Dan has shared on GitHub is a very simple example of what Kitty or an AI PA can do, and not the full extent of what he is doing… which makes complete sense, as people are different, this is a personal project, and it would take a massive product team to try and make it work for everyone. (and then it would also probably suck for everyone)

My first three morning questions from Kitty:

"What emotions are setting the tone for your morning, and can you identify their sources?"

"In consideration of your physical and emotional state, what's the key thing you would like to achieve today?"

"As you envision your day, is there any particular activity or short break you could incorporate to recharge?"

All three are clearly inspired by but not directly pulled from the default question set, and having been run through GPT they feel much more natural than the defaults. Weird, but huge thanks to Dan for being miles ahead of me in his thoughtfulness towards what to ask to help one write useful answers.

Tomorrow I’ll write more about how these initial questions immediately impacted my day, and some other thoughts about how I want to use Kitty in my day-to-day life.

Here are my notes on setting up Kitty, in case someone happens to be here wanting to set up their own. Dan’s readme is thorough, but there were a few things that were slightly more involved for me because I hadn’t already set up an OpenAI Platform account.

Setup Steps

  1. Install the kitty terminal. This is not required; you can run the AI PA in any terminal.

While it’s not mandatory to install the kitty terminal app, I’m doing it because I’ve never used kitty, and I want to see how it compares to the iTerm2 quake-style dropdown terminal I’ve been using for the last n years.

kitty binary installation instructions

Stopping myself from going down a rabbit hole of exploring kitty, I’m going to use it as-is after install and drop this thread here to explore customizing as a replacement for iTerm later.

  2. Fork and clone Dan’s Basic Kitty Journaling repo to your local machine.

  3. Set up an OpenAI Platform account and add some credits. I initially just created an API key, but it didn’t work for GPT-4 without credits, and I had to create a new API key after adding credits; the existing key didn’t automatically start working.

  4. Follow the rest of the instructions in the installation section of the readme.

Working Towards Sustainable Independence

07 Jan 2025 — Alpine Meadows, CA

Getting Started

This will have to be short, but I need to get started.

Stepping Away from Scenery

In June 2024, I stepped away from my latest professional role as co-founder and head of engineering at Scenery. It was a friendly parting of ways, with 5 years of work completed and a great person available to step in. I knew that I wouldn’t be able to give family and the company the time and focus they each needed through the summer, and I didn’t want to do both poorly.

Later, while describing some issues that were concerning me, including heightened anxiety, poor sleep, and others, my wife pointed me to a few detailed descriptions of burnout. I didn’t realize it was so clearly defined, but yep, that was me; juggling family, startup work, and trying to find time for physical activity had been too much for a bit too long.

Summer Break

The summer was great. I stayed away from the computer, spent a bunch of time outside, and had a wonderful time not feeling stressed and enjoying family. Yay!

Returning to Routine

With the start of school in August, I settled into the routine of getting the kids to school, then sitting down at the computer for a few hours with the goal of exploring and educating myself on tech that I wasn’t able to pursue while working on Scenery. I’ve pretty successfully done this, but haven’t produced much more than a bunch of exploratory projects (and a bunch of learning about what works and what doesn’t). I still want to roll the hundreds of hours of exploratory work into a shippable project or two, but I’ve found that I need to focus on structure now to give myself some accountability and track what I’m doing.

I also had the opportunity to contract with the Scenery team as they transitioned into new roles with Adobe, which was awesome and I’m so happy for the outcome and to be able to close that chapter with the team.

Goals and Plans

But now, for that accountability and tracking.

My Goal

Spend ≤ 50% of my time each day on the computer producing digital work, and the rest on family and healthy active activities. Ideally both sides of the day can also generate income. Contracting is always an option, but my (digital) goal is for personal projects to produce income as well.

Unfortunately, I’ve always been more driven by curiosity and learning than income generation, so I have some growth to do to stop stopping when my questions are answered and turn my work into finished projects.

Immediate Plans

My to-do list has had a bullet of “set up AI assistant” on it for a few months now, so I’m going to do that. This journal post is an attempt to force myself to follow through. I don’t think anyone will read it, but I’ll know it’s here.

I’m inspired by Daniel Catt, and have been impressed with his explorations of how to best use different forms of social media, video, and writing; and now the addition of AI to track projects, etc. I’m going to see if his “Kitty” assistant can be a useful part of my workflow. My next step is to set that up. Hopefully, I don’t waste too much time coming up with a clever name.

My 50% computer time is nearly up, and I have a Ski Coaching clinic to get to. I guess I can call today a success?

Tomorrow

Write an update on setting up my version of Kitty.

Serverless Auth: OpenAuth on AWS with SST

17 Dec 2024 — Truckee, CA

I’ve been playing with a bunch of “serverless” solutions lately. I used Amplify to configure and bootstrap various AWS services for Scenery and the projects that preceded it, and I’d say it provided equal parts convenience and frustration.

As a learning exercise, I recently used Amplify gen2 to deploy an OIDC wrapper around the Flickr API as a Next.js app. It worked, and served its purpose of refreshing my memory on how OAuth2 and OIDC work while giving me an excuse to play with Next.js, but I wouldn’t use the final product for a production application. It also convinced me that it’s time to try out SST as a replacement for Amplify. Let’s see if we can find a bit more convenience and less frustration.

Enter SST’s OpenAuth. Let’s give it a try as an auth service and explore SST in the process.

I’m going to deploy OpenAuth as its own standalone service on AWS using SST.

Setup SST

  1. If you haven’t used SST before, read through the workflow and configure your IAM credentials.
  2. Create a new directory for our project: mkdir openauth && cd openauth.
  3. Initialize SST: npx sst@latest init, and use the default options (vanilla template and aws).

Add the OpenAuth Component

In the run function of sst.config.ts add:

const auth = new sst.aws.Auth("MyAuth", {
  authorizer: "src/authorizer.handler"
});

Authorizer

  1. Add OpenAuth npm package: npm i @openauthjs/openauth.
  2. Create a file at src/authorizer.ts: mkdir src && touch src/authorizer.ts.

This authorizer is based on the lambda example, with a few changes because we’re using the Auth component, which creates the DynamoDB table and links it to the authorizer automatically.

import { authorizer } from "@openauthjs/openauth"
import { handle } from "hono/aws-lambda"
import { subjects } from "./subjects.js"
import { PasswordAdapter } from "@openauthjs/openauth/adapter/password"
import { PasswordUI } from "@openauthjs/openauth/ui/password"

async function getUser(email: string) {
  // Get user from database
  // Return user ID
  return "123"
}

const app = authorizer({
  subjects,
  providers: {
    password: PasswordAdapter(
      PasswordUI({
        sendCode: async (email, code) => {
          console.log(email, code)
        },
      }),
    ),
  },
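  // success() runs once a login completes; we map the provider payload to a "user" subject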
  success: async (ctx, value) => {
    if (value.provider === "password") {
      return ctx.subject("user", {
        id: await getUser(value.email),
      })
    }
    throw new Error("Invalid provider")
  },
})

export const handler = handle(app)

Subjects

  1. Add the valibot npm package: npm i valibot.
  2. Create a file at src/subjects.js: touch src/subjects.js.

import { object, string } from "valibot";
import { createSubjects } from "@openauthjs/openauth";

export const subjects = createSubjects({
  user: object({
    id: string(),
  }),
});

Run SST dev mode

Run SST dev to configure AWS resources and run the app in dev mode.

npx sst dev

Once SST has deployed all of the AWS resources needed for OpenAuth, it will output a URL you can use to test the authorizer. It will look similar to this:

https://[uuid].lambda-url.us-west-2.on.aws

Add /.well-known/oauth-authorization-server to the URL and open it in a browser to test that your OpenAuth service is up and running. You should see something like this:

https://[uuid].lambda-url.us-west-2.on.aws/.well-known/oauth-authorization-server

{
  "issuer":"https://[uuid].lambda-url.us-west-2.on.aws",
  "authorization_endpoint":"https://[uuid].lambda-url.us-west-2.on.aws/authorize",
  "token_endpoint":"https://[uuid].lambda-url.us-west-2.on.aws/token",
  "jwks_uri":"https://[uuid].lambda-url.us-west-2.on.aws/.well-known/jwks.json",
  "response_types_supported":[
    "code",
    "token"
  ]
}

Congratulations, you have successfully set up a basic OpenAuth service! You should now be able to set up an Auth Client to use with your shiny new OpenAuth server.
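
As a preview of that next step, here’s a rough sketch of the client side, based on the OpenAuth README at the time of writing. Treat it as a shape rather than copy-paste code: the import path, exact signatures, and return values may have changed between OpenAuth versions, and the clientID, issuer, and redirect URL are placeholders.

import { createClient } from "@openauthjs/openauth/client"

const redirectURI = "https://myapp.example.com/callback" // placeholder

const client = createClient({
  clientID: "my-app",                                   // any identifier for this client
  issuer: "https://[uuid].lambda-url.us-west-2.on.aws", // the URL sst dev printed above
})

// 1. Kick off a login: get a URL to send the user to
export async function login() {
  const { url } = await client.authorize(redirectURI, "code")
  return url
}

// 2. In the callback handler, trade the ?code= query parameter for tokens
export async function callback(code: string) {
  const exchanged = await client.exchange(code, redirectURI)
  return exchanged // access/refresh tokens (exact shape depends on the OpenAuth version)
}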

©2025 Chris James Martin