Over the summer I read Ross King’s Defiant Spirits: The Modernist Revolution of the Group of Seven (2010), which is an excellent book on the Group. It really brings the individuals to life: too often now they are made out to be an undifferentiated mass of seven (or ten) men who all did the same thing, but of course they were each different. I think the one I’d most like to meet is J.E.H. MacDonald, who seemed to have an awful lot of fun. “My religion is the Arts and Letters Club,” he said.
A quote from chapter 35 of The Duke’s Children (my copy is the new expanded edition) by Anthony Trollope. The Duke of Omnium is talking with a few other men after a small dinner party. Silverbridge, his son (known by a courtesy title), offers a thought about the House of Commons.
“I hear men say that it isn’t quite what it used to be,” said Silverbridge.
“Nothing will ever be quite what it used to be. There will always be changes.”
A Mastodon note last night (I still can’t bring myself to say “toot”) from @teledyn@mstdn.ca pointed out the HTTP Observatory, which “enhances web security by analyzing compliance with best security practices.” You put in a URL and it gives you a score on the site’s security, with tips on how to make it better.
I put in this site and got 65/100. With a bit of work I got to 115/100! I was pretty pleased about that. Then I discovered I had broken some small things because the settings were too restrictive. I fixed that and now I get 125/100!
Here are some notes about that, in case it’s useful to me in the future or anyone else before then. Note: I am no expert, I’m not even a web developer any more. I’m just a person running a static web site built with Jekyll who hacked on some HTTP headers.
Headers
Header information is normally used by your browser to understand what it should do with the web page content that follows. It’s rarely of interest to people unless they’re doing something technical. If you run curl --head https://www.miskatonic.org you can see only the headers for this site, which look like this:
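An illustrative sketch of that kind of output (the dates, lengths, and server details here are invented for the example; the custom headers match the ones discussed below):

```
HTTP/1.1 200 OK
Date: Fri, 12 Jul 2024 12:00:00 GMT
Server: Apache
Last-Modified: Thu, 11 Jul 2024 20:00:00 GMT
Accept-Ranges: bytes
Content-Length: 12345
X-Clacks-Overhead: GNU Terry Pratchett
Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'; media-src 'self' data:; frame-src https://www.youtube.com
X-Frame-Options: DENY
Referrer-Policy: no-referrer
X-Content-Type-Options: nosniff
Cross-Origin-Resource-Policy: same-origin
Strict-Transport-Security: max-age=31536000
Content-Type: text/html
```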
(No cookies are set because there are no cookies. There is also no logging. I know nothing about anyone who looks at this site.)
Everything up to Content-Length is generated by the web server, which here is Apache hosted on Pair. Everything after that is set by me. Here’s a rundown. Browse Practical security implementation guides for more about all this.
X-Clacks-Overhead
Header set X-Clacks-Overhead "GNU Terry Pratchett"
(This is how I set the header in the .htaccess file where I can configure things. You might set it differently in your situation.)
This remembers Terry Pratchett. See X-Clacks-Overhead. This has nothing to do with security; I just think Terry Pratchett is one of the greatest writers of the last fifty years.
Now, I don’t have any JavaScript running on my site, so it’s not going to be a problem. Well, I don’t have any of my own JavaScript, but I do sometimes embed a YouTube video, such as in this post about Molly White’s Become a Wikipedian in 30 Minutes video. To make that work I have to have that special frame-src configuration.
As I understand it, default-src being ‘self’ means that by default, while looking at this site the browser should only load resources from this site, unless otherwise specified. Having style-src as ‘unsafe-inline’ means I can set CSS inline, specifying font-size: smaller right in a web page. It should be in a CSS file, and maybe I’ll fix that one day, but I’m not going to fuss about it right now. In media-src there’s an extra data: that says it’s all right to load media (such as images) that are specified right in the HTML with a data URL. I had to add this to do away with a warning, which confused me because I never load any media this way. Maybe I’ll figure it out later.
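Pulled together, the .htaccess line for all of that might look something like this (my reconstruction from the description above, not the site’s exact policy; the YouTube host in frame-src is my assumption):

```
Header set Content-Security-Policy "default-src 'self'; style-src 'self' 'unsafe-inline'; media-src 'self' data:; frame-src https://www.youtube.com"
```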
Using Firefox’s developer tools was a huge help in figuring all this out. I’d load the site, hit Ctrl+Shift+I to pop them up, go to the Console, and reload over and over as I tweaked settings. (Whether or not JavaScript was enabled changed things, so test with it off and on. Privacy settings and add-ons may also affect things.)
This isn’t as tightly locked down as it could be, but it’s much better than it was before.
X-Frame-Options
Header set X-Frame-Options DENY
X-Frame-Options helps prevent clickjacking, as does the frame-ancestors setting above. This option is obsolete, but so what.
Referrer-Policy
Header set Referrer-Policy no-referrer
With this Referrer-Policy setting, if someone follows a link from my site to another there is no header passed along telling the destination site that they came from here. This could be needed in some situations, but not here.
X-Content-Type-Options
Header set X-Content-Type-Options nosniff
X-Content-Type-Options tells the browser “not to load scripts and stylesheets unless the server indicates the correct MIME type.” It’s nosniff because it stops the browser from content sniffing.
Cross-Origin-Resource-Policy
Header set Cross-Origin-Resource-Policy same-origin
Cross-Origin-Resource-Policy “lets websites and applications opt-in to protection against vulnerabilities related to certain cross-origin requests.” Setting it to same-origin “limits resource access to requests coming from the same origin. This is recommended for URLs that reply with sensitive user information or private APIs.” I’m not doing anything that even needs to worry about this, so I set it to the most secure option, because it will never matter.
Strict-Transport-Security is an important one: it says that the site should only ever be accessed with HTTPS. The age setting is in seconds, and it equals one year. I don’t know if it matters any more, but you can add your domain to the HSTS Preload List as well.
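The matching line in .htaccess would be something along these lines (a sketch; 31536000 seconds is one year):

```
Header set Strict-Transport-Security "max-age=31536000"
```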
Wow! That looks very long and rather suspicious. The URL has a lot of domain names (such as chrt.fm) in it. What’s going on? The best tool to find out is curl. Here’s what I ran at the command line, with the output:
The --head option has curl report the HTTP header (mostly metadata) instead of downloading anything. Notice the HTTP 302 response and the location line. That tells the client (browser or podcast program or whatever) that it should look to this URL to get what it wants. But if we get that URL …
Another location header, still containing a lot of things that look like domain names. It’s going to take a while to get to the bottom of this.
I wrote myself a little function (for bash, the shell I use), so I can supply a URL and get the location header back.
$ function get_location() { curl --silent --head "$1" | grep location | sed 's/.* //'; }
$ get_location "https://pdst.fm/e/chrt.fm/track/47G541/pscrb.fm/rss/p/mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748"
https://chrt.fm/track/47G541/pscrb.fm/rss/p/mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Nice!
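The pipe is doing nothing clever: grep keeps the location line and sed deletes everything up to the last space, leaving just the URL. You can check that on canned header text (a made-up example, not the real feed’s headers):

```shell
# grep finds the location line; sed strips everything up to the last
# space, leaving only the URL
printf 'HTTP/2 302\nlocation: https://example.com/next.mp3\n' \
  | grep location | sed 's/.* //'
# prints https://example.com/next.mp3
```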
Next I wanted to write a short script that would work through all these URLs until it got to the bottom. I spent five or ten minutes on this but ran into some kind of problem with checking if a string was empty or something like that. I thought, “Wait a minute. Simon Willison would use an LLM to do it for him!” So that’s what I tried, with DuckDuckGo’s Llama 3.
I told it:
Write a bash shell function that uses curl to find if a given URL has a location header, and if so, return that location URL.
It gave me:
get_location_header() {
    local url="$1"
    local location=$(curl -s -I -o /dev/null -w "%{http_code} %{redirect_url}" "$url")
    local code=$(echo "$location" | cut -d' ' -f1)
    local redirect_url=$(echo "$location" | cut -d' ' -f2-)
    if [ "$code" -eq 301 -o "$code" -eq 302 ]; then
        echo "$redirect_url"
    else
        echo "No location header found"
    fi
}
I said:
It is possible that running that function on a URL returns a URL that itself returns a location header. Use that function to write a recursive shell script that works through multiple levels of location headers until there are no more.
It gave me:
follow_redirects() {
    local url="$1"
    local max_redirects=10 # prevent infinite loops
    while [ $max_redirects -gt 0 ]; do
        local location=$(get_location_header "$url")
        if [ "$location" != "No location header found" ]; then
            url="$location"
            max_redirects=$((max_redirects - 1))
        else
            echo "Final URL: $url"
            return
        fi
    done
    echo "Error: too many redirects"
}
I tried that and it worked, but not how I wanted. I said:
That works, but make it show each URL as it recurses.
follow_redirects() {
    local url="$1"
    local max_redirects=10 # prevent infinite loops
    while [ $max_redirects -gt 0 ]; do
        echo "Checking URL: $url"
        local location=$(get_location_header "$url")
        if [ "$location" != "No location header found" ]; then
            url="$location"
            max_redirects=$((max_redirects - 1))
        else
            echo "Final URL: $url"
            return
        fi
    done
    echo "Error: too many redirects"
}
I said:
That works nicely.
I'm glad to hear that! The `follow_redirects` function should be helpful for tracing the chain of redirects for a given URL. If you have any more questions or need further assistance, feel free to ask!
Here’s how to run it all in a bash shell:
$ function get_location_header() {
    local url="$1"
    local location=$(curl -s -I -o /dev/null -w "%{http_code} %{redirect_url}" "$url")
    local code=$(echo "$location" | cut -d' ' -f1)
    local redirect_url=$(echo "$location" | cut -d' ' -f2-)
    if [ "$code" -eq 301 -o "$code" -eq 302 ]; then
        echo "$redirect_url"
    else
        echo "No location header found"
    fi
}
$ follow_redirects() {
    local url="$1"
    local max_redirects=10 # prevent infinite loops
    while [ $max_redirects -gt 0 ]; do
        echo "Checking URL: $url"
        local location=$(get_location_header "$url")
        if [ "$location" != "No location header found" ]; then
            url="$location"
            max_redirects=$((max_redirects - 1))
        else
            echo "Final URL: $url"
            return
        fi
    done
    echo "Error: too many redirects"
}
$ follow_redirects "https://pdst.fm/e/chrt.fm/track/47G541/pscrb.fm/rss/p/mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748"
Checking URL: https://pdst.fm/e/chrt.fm/track/47G541/pscrb.fm/rss/p/mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Checking URL: https://chrt.fm/track/47G541/pscrb.fm/rss/p/mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Checking URL: https://pscrb.fm/rss/p/mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Checking URL: https://mgln.ai/e/433/claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Checking URL: https://claritaspod.com/measure/traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Checking URL: https://traffic.megaphone.fm/QCD6489207092.mp3?updated=1717391748
Checking URL: https://dcs.megaphone.fm/QCD6489207092.mp3?key=8cd61d704fe268b21da14bc81f17f7ba&request_event_id=9706319f-431f-465d-a85d-d55053cbb184&timetoken=1720741181_4922C6F958D58D4C0673A8DE2476B183
Final URL: https://dcs.megaphone.fm/QCD6489207092.mp3?key=8cd61d704fe268b21da14bc81f17f7ba&request_event_id=9706319f-431f-465d-a85d-d55053cbb184&timetoken=1720741181_4922C6F958D58D4C0673A8DE2476B183
I knew to use function so bash would treat these as function definitions when I pasted them in (though strictly speaking the keyword is optional in bash), but I could have told Llama to give me something I could copy and paste right into the shell. Also, it’s using short options for curl (such as -I) where I prefer long ones (such as --head) in anything scripted, because it helps readability. I could have told it to change that as well.
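For instance, here is the inner function with long options substituted (my sketch; the behaviour is intended to be identical to what Llama gave me):

```shell
get_location_header() {
    local url="$1"
    # --head fetches only the headers; --write-out prints the status code
    # and any redirect target once curl finishes
    local location=$(curl --silent --head --output /dev/null \
        --write-out "%{http_code} %{redirect_url}" "$url")
    local code=$(echo "$location" | cut -d' ' -f1)
    local redirect_url=$(echo "$location" | cut -d' ' -f2-)
    if [ "$code" -eq 301 -o "$code" -eq 302 ]; then
        echo "$redirect_url"
    else
        echo "No location header found"
    fi
}
```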
It works very nicely, and what’s more, the scripts are good. Better than anything I could hack in a few minutes, and I’ve never used local in a shell script in my life, but it seems like a good practice.
Now I can feed that into my audio player and finally listen to episode 202 of Philosophize This! I see it’s thirty-seven minutes long, which is shorter than it took me to write this.
The scrollbar in Firefox disappears when the pointer isn’t moving. For example, the top right of this site looks like this.
If I touch the touchpad, it appears and looks like this. It’s very small.
Thanks to this note by @mgorny@treehouse.systems I learned how to make it better. Go to about:config (and agree to the danger), then set widget.gtk.overlay-scrollbars.enabled to false and widget.non-native-theme.scrollbar.size.override to 24 or so. The scrollbar becomes permanent and clearly visible. I like this for several reasons, a main one being that it’s always obvious how long a page is and where I am in it.
That’s far more technical than it should be to tweak this, but Firefox has bigger problems to fix (such as not getting bogged down in AI nonsense).
From The Fall of the Sparrow (1955) by Nigel Balchin. This scene happens in 1935 in England; the narrator is talking to people going to protest the British Union of Fascists.
Leah sat up and said, “Listen …” and then paused and closed her eyes again. There was a moment’s silence. Then she said quietly, “It’s like this. The Fascists go down there every Sunday, and a good many week days as well. They choose the places where they know there are a lot of Jewish people. They get up and they preach hatred of the Jews, and all that crazy Hitler stuff about their being responsible for everything. If a Jewish person passes he’s insulted, and if he says a word in protest he’s manhandled. Or she, if it’s a woman. Of the people listening, ninety per cent don’t agree with what’s being said. But they’re not organized and the Fascists are. People aren’t going to risk being knocked about. So they keep quiet and let the Fascists talk.”
I said, “But if they don’t agree …”
“Wait a minute. By the time this has happened week after week and the people see that no one stands up to the Fascists, they begin to think that nobody can—that it’s all hopeless. And since everybody likes to be on the winning side they begin to wonder whether the safest thing isn’t to start wearing a black shirt themselves. See?”
Jason said, “It’s a bluff, you see, Henry. Of course Fascism always is. After all, Mussolini’s march on Rome …”
“It’s a bluff,” Leah said curtly. “But it’s a bluff that’s got to be called everywhere, every time it’s tried. Every time. Everywhere.”
They work with the new Reminder App, which doesn’t look at all useful, even to people who want to mingle paper and digital scheduling. You’re locked into their app—there’s no integration with other calendar systems—and unless I misunderstood the video, after you scan in the QR code for a given day you still need to specify the date in the app.
Some highlights from the privacy policy for the app, which, remember, is scanning your appointments and your handwriting:
1.1 Account Information: To use the Filofax Diary app, you will need to create an account. We collect information such as your name, email address, and password.
1.2 Diary Page Information: When you use the app to create reminders associated with diary pages, you may provide images of diary pages and set reminders, times, and dates. These images and associated data will be stored on our servers.
2.3 Research and Analytics: We may use your information for research and analytics purposes to improve the functionality and features of the app.
3.1 Third-Party Service Providers: We may share your information with third-party service providers that help us with app functionality, hosting, and analytics. These providers are bound by confidentiality agreements.
3.2 Legal Requirements: We may disclose your information if required by law, regulation, or legal process.
3.3 Change of Ownership: In the event of a merger, acquisition, or sale of assets, your information may be transferred to the acquiring entity.
4.1 Security We take reasonable measures to protect your information from unauthorized access, loss, misuse, or alteration. However, no method of transmission over the internet or electronic storage is 100% secure, and we cannot guarantee absolute security.
The privacy policy here doesn’t match what’s on Google Play, which says no data is collected or shared. (The CCPA clauses in the policy will make everyone outside California look on admiringly.)
Even were this app the most secure and private thing possible, I think there would be very, very little interest in it. Are there any Filofax users who have a need for this? I hope they have the regular inserts for sale by October.
Last month I got a new phone and installed GrapheneOS on it. It’s working wonderfully and I’m happy I did it. Here are a few notes.
Background
GrapheneOS is a privacy-focused phone operating system based on Android. I’ve used Android-based phone OSes before. I bought a Samsung Galaxy S III in 2012, and by 2015 it was getting a little old (Samsung had stopped updating it) so I installed CyanogenMod on it. “It’s like having a new phone,” I wrote. About eighteen months later I realized, “but one that isn’t as good as my old phone,” so I installed LineageOS on it, and said, “It really is like having a new phone.” Both OSes let me keep more up to date with Android and they had no vendor applications installed. Lineage was nice.
By 2019 the phone was seven years old and barely functioning. I had to carry a USB battery with me so I could recharge it every few hours, and it would sometimes spontaneously reboot while I was in the middle of a phone call. I bought a Moto G7 Play, which was reasonably priced and didn’t do anything special. Getting a new phone after seven years was fantastic: everything worked, and fast!
The new phone had some Motorola junk on it, which I ignored, and of course had the full Google suite of everything. I removed some applications and took other steps to have as much privacy as I could manage, but this was an increasing concern for me. After five years the phone was showing its age and there were no more operating system updates for it. It was past time for a new phone.
I’d been watching GrapheneOS for a while and had decided my next phone would run it, which meant I’d need to get a Pixel from Google—the current release is a Pixel 8, which I bought at a store. (Happily I didn’t need a new SIM card.)
Why run GrapheneOS on it? Because I wanted as much privacy and control as possible. I was guided by Michael Bazzell’s Extreme Privacy: Mobile Devices, where he says:
I believe GrapheneOS is the ultimate solution for our needs. It is the only option which meets all of my requirements, including the following.
It is completely open-source software which converts a traditional Google Pixel device into a pure native Android environment, including many enhanced privacy and security features, without any Google network services or connections.
It has a large community testing and verifying any changes, and updates are much more frequent than other builds.
It provides only the basics and allows you to customize the software you need.
It has a locked bootloader and does not require root access.
It allows sandboxed Google push services if appropriate for your needs which can easily be disabled or removed completely if desired.
It does not require microG for notifications.
I recommend buying the entire set of Bazzell’s books. (I hope he restarts the Intel Techniques podcast one day. It was great.)
My biggest worry (aside from bricking the phone) was that I wouldn’t be able to run Cisco’s Duo authentication app, which I need for work.
As GrapheneOS recommends, I started up the phone out of the box, skipped over everything about logging in to Google, and upgraded the system and enabled developer mode. Then I used the web-based installer, which works like magic. I had to use my work laptop because it has Edge on it, which I needed to make the USB stuff work, but basically I plugged my new phone into my laptop, pushed a few buttons on a web page, and in a few minutes I had a new operating system on my phone. Compared to what I did years ago this is an unbelievable dream!
After that I followed along with Bazzell’s book for some basic configuration. Some points from my notes also include:
Configuring the phone to always use the same MAC address when on my home network (“This can be controlled per-network in Settings > Network & internet > Internet > NETWORK > Privacy”), and configuring my router first to always give that MAC address the same IP number, then to always route it through the VPN I use on the router.
Installing Google Play Services: I want to be able to receive phone calls and push notifications.
Installing Aurora Store through F-Droid, so I can anonymously install apps from the Play Store. This worked the first time for me, with no problems. I installed Duo Mobile and it worked: phew! With that done I knew everything would be okay now. I also installed Signal and Firefox here.
Copying and restoring configurations and data from my old phone, including a backup of all my Signal chats.
I may write up how I backed up and restored my contacts and other settings with Termux, but I’ll stop here for now, with GrapheneOS installed and working.
So far
The phone is wonderful! Everything is working perfectly. I installed fewer applications on it, and I’m using it much less. When I do use it, I know it’s safe and secure. GrapheneOS is giving me a great user experience. Many thanks to everyone who works on it! I made a donation to the project and will do another soon.
One small feature that’s great to have back (Lineage had it) is scrambling the PIN input on the lock screen. Instead of the standard numeric grid layout, it mixes up the numbers each time, so anyone glancing over your shoulder will find it much harder to see the PIN.
If you’re thinking about installing GrapheneOS but a bit worried something might go wrong, don’t be. Go for it.
I just read Verified: How to Think Straight, Get Duped Less, and Make Better Decisions About What to Believe Online by Mike Caulfield and Sam Wineburg (University of Chicago Press, 2023) and I really liked it. This is a very good introduction to information and digital literacy, well written with a lively tone. I recommend it to anyone interested in assessing online information—from librarians doing IL to people wondering about apparent misinformation or vaguely conspiracy-minded links they get sent.
It uses the SIFT approach: Stop, Investigate the source, Find better coverage, and Trace back to the original context. (This is intentionally not a checklist like the PARCA test). There are many examples, illustrated with lots of images of web pages and Google searches. Someone new to this would learn good practices, and someone experienced would see a well-presented approach that gives insightful coverage of familiar issues. It’s good on the inherent “compellingness” of some web sites and sources, and how to do a vibe check. The chapter on Wikipedia is very good, and I like how they depend on Wikipedia as the best place for quick checks on the reliability of sources such as journals and think tanks. It is.
The book is grounded in an assumption of good faith—that someone wants to determine the truth and validity of some information they see online. Many do. But many don’t. If they don’t, they’re not going to read a solid, practical introduction like this. On the other hand, if you’re helping or teaching high school students or undergraduates, you have an ideal audience for this book.
There’s a postscript about ChatGPT and large language models, which went big just as the book was being finished. It’s as good as anyone could manage given the timing, but how well will the SIFT approach, or PARCA, or anything else, work if (when) everything is obscured by torrential amounts of bullshit? (Except Wikipedia, I hope.)
There’s a good review by Joel Blechinger in The Journal of Radical Librarianship (vol. 10, 2024) that goes deeper into some issues. See also this thoughtful review by Stephen Michaels in College & Research Libraries (vol. 85 no. 2, 2024).