Under pressure from US federal law, all Russian contributors have been removed from the Linux maintainers file. Read the thread here
-
A very simple weather application for the terminal
There are many ways to check the weather on my computer, but all are slightly inconvenient. I’d rather not open a browser when I don’t need to, and the very popular wttr.in includes tons of information and ASCII art that I don’t need.
The following script can be called from the command line to pull data from pirate-weather.apiable.io, a free service fully compatible with the once popular and now defunct Dark Sky Forecast.io API.
This script simply lists (see screenshot) the current temperature and conditions, as well as an hourly forecast with the same information over the next 24 hours.
This was thrown together, unpolished, for personal use, so Fahrenheit is assumed, for example. This can easily be modified as needed.
#!/usr/bin/env bash
#~rts/oarion7
# Takes 1 optional argument N number of hours to forecast (max 48)

myapikey='' # Get one from: https://pirate-weather.apiable.io/
coordinates='40.713914010140385,-73.98905208987489' # A good place to get slapped hard in the face
custom_number='24' # No. of hours if not specified in arg. Comment to default to API (48).

if [[ -n "$1" ]] && [[ ! -z "${1##*[!0-9]*}" ]] && [[ "$1" -le 48 ]] ; then
    custom_number="$1"
fi

data="$(curl -LSs "https://api.pirateweather.net/forecast/${myapikey}/${coordinates}")"
[[ -z "$data" ]] && { echo "Error fetching API data."; exit 1; }

date="$(date -d "@$( jq '.currently.time' <<< "$data" )" +'%A, %d %B %Y %H:%M' )"

printf "\n"
printf '%s\n%s \n\nCurrently %s F and %s' \
    "Weather Report by Pirate Weather API" "$date" \
    "$(jq '.currently.apparentTemperature' <<< "$data")" \
    "$(jq '.currently.summary' <<< "$data" | sed 's/\"//g' )"
printf "\n\n"

forecast="$(jq '.hourly[]' <<< "$data" | egrep -v '^\"' )"

if [[ -n "$custom_number" ]] ; then
    readarray -t times < <( echo "$forecast" | jq ".[:$custom_number] | .[] | .time" )
else
    readarray -t times < <( echo "$forecast" | jq '.[] | .time' )
fi

readarray -t summaries < <( echo "$forecast" | jq '.[] | .summary' | sed 's/\"//g' )
readarray -t temps < <( echo "$forecast" | jq '.[] | .apparentTemperature' )

tabs 3
for i in $(seq 0 $(( ${#times[@]} - 1 )) ) ; do
    printf '%s\t\t%s\t\t%s\n' "$(date -d "@${times[$i]}" +'%H:%M')" "${temps[$i]} F" "${summaries[$i]}"
done
printf "\n"
At a glance, I know before heading out for the day if I’m properly dressed and if I need to bring an umbrella, and that’s exactly all I need.
-
Installing Linux on a 2008 Intel-based iMac
This weekend I decided to install Linux on an early 2008 24-inch iMac (3.06 GHz Intel Core 2 Duo).
This machine was at some point upgraded to a 1TB HDD and 4 GB of RAM, but has been stuck on an outdated version of Mac OS X and thus limited to, for example, very outdated versions of Firefox. All in all, it’s functioned in recent years as little more than an elegant, oversized paperweight.
As the terabyte of storage was mostly unused, and it may be useful at some point to access the MacOS installation, I decided to build a dual boot setup.
I followed this guide to create the new root and swap partitions ahead of time from inside MacOS rather than in the Ubuntu/Mint installer, and to download and install rEFInd. This machine was running MacOS 10.9.5, which pre-dates the introduction of System Integrity Protection (SIP), so there was no need to boot into Recovery mode to disable it.
While I tinkered with Macintosh computers in my childhood during the PowerPC era and use modern Apple silicon at work, I’m less familiar with Apple’s 14-year Intel era between the two, and I’ve never installed Linux on such a machine before. I was therefore somewhat surprised at how seamless the process was.
I opted to stick to territory as familiar as possible and used a USB stick I had recently used to install Linux Mint MATE Edition 21.1 Vera (based on Ubuntu 22.04 LTS). Everything worked out of the box, apart from WiFi (until the automatically recommended Broadcom drivers were added) and, a bit more critically, proper fan control.
Just don’t let it overheat
This machine will run very hot on Linux without some mitigation, typically provided by either macfanctld or mbpfan, both available in the Ubuntu/Mint repositories, but primarily targeting MacBooks.
I first installed mbpfan via apt, as it is the more actively maintained of the two. However, the machine ran quite a bit warmer than I’d expect, and I’m not sure the fans were running at all, despite confirming that the applesmc and coretemp kernel modules were loaded.
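To check whether the SMC is reporting any fan activity at all, you can read the applesmc sysfs nodes directly or use lm-sensors. A quick sketch (the applesmc.768 suffix is typical but may differ per machine):

# Fan speeds as reported by the SMC (requires the applesmc module):
grep . /sys/devices/platform/applesmc.768/fan*_input

# Or, with the lm-sensors package installed:
sensors | grep -i fan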
After some research, I removed mbpfan and opted for Anton Lundin’s fork of macfanctld, specifically modified to enable control of all 3 fans on the iMac:
git clone --single-branch --branch fan3 https://github.com/glance-/macfanctld.git
sudo apt build-dep macfanctld
cd macfanctld/
make
sudo make install
I then ran sudo macfanctld -f and immediately noticed a difference. To automate this, I created a systemd service. I created shell script /opt/root-launches-macfanctld with contents:
#!/bin/sh
/usr/sbin/macfanctld -f
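The service can only run this script if it’s executable:

sudo chmod +x /opt/root-launches-macfanctld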
With the script in place and executable, I created another file named /etc/systemd/system/macfanctld.service:
[Unit]
Description=Mac Fan Control Daemon
Documentation=man:macfanctld(1)

[Service]
Type=idle
ExecStart=/opt/root-launches-macfanctld
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target
Finally, the service can be started and enabled, and thus launched by root at system startup moving forward:
sudo systemctl start macfanctld
sudo systemctl enable macfanctld
Then, just for fun, I installed a simple dock as well as Amiga icons and Metacity themes.
I also installed Mikhail Shchekotov’s “flying toasters” screensaver, a more faithful port of the classic After Dark screen saver than the one long bundled with xscreensaver.
Addendum
Performance has been very impressive considering the age of this machine. It’s definitely usable for browsing the web and playing music and video. I also did some writing and edited some icons in GIMP, and the interface was surprisingly snappy. Nonetheless, I’ve found one more issue.
When using ethernet, the wired connection becomes disabled after waking from suspend. After running sudo lshw -C network, I discovered I am using the sky2 driver. A few solutions/workarounds are possible.
In my case I added pci=nomsi,noaer to the GRUB_CMDLINE_LINUX_DEFAULT line in file /etc/default/grub and then ran update-grub. After rebooting, I confirmed this resolved the issue for me.
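For reference, the resulting line in /etc/default/grub looks something like this (assuming the stock “quiet splash” defaults; keep whatever options you already have):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nomsi,noaer"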
Another workaround is simply to remove and re-add the module, i.e.
sudo modprobe -r sky2 && sudo modprobe sky2
The latter could then be toggled manually or triggered by wake events in a systemd service or udev script.
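As a sketch of the systemd variant (the unit name and file are my own invention, and you might also want a matching entry for hibernate.target), a oneshot unit hooked to suspend.target will run the reload on resume:

# /etc/systemd/system/sky2-resume.service (hypothetical name)
[Unit]
Description=Reload the sky2 module after resume
After=suspend.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'modprobe -r sky2 && modprobe sky2'

[Install]
WantedBy=suspend.target

Enable it with sudo systemctl enable sky2-resume.service.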
-
Search for Rumble videos in the terminal
Rumble has some documentation suggesting the existence of an official API, but it appears all requests for API keys have fallen on deaf mailboxes. This is working, for now:
#!/usr/bin/env bash
#rts/oarion7
# Search for videos on rumble.com, returns url(s) for selected results; tab key to select multiple
# Dependencies: xmllint, rlwrap, jq, fzf https://github.com/junegunn/fzf
# Needs a recent enough fzf to support {n} placeholders and the become() event
# Recommended: https://github.com/yt-dlp/yt-dlp/

history_file="$HOME/.config/rumble_search_history"
UA=$(<"$HOME/scripts/UserAgent/string.txt") # Spoof an updated UA string just in case.

if [[ -z "$@" ]] ; then
    printf '\n%s\n' '=>> Search for Rumble videos'
    #read -p "> " readitem
    readitem=$(rlwrap -S '> ' -H "$history_file" -o cat)
else
    readitem="$@"
fi

query="$( printf '%s\n' "$readitem" | tr -d '\n' | tr -d '!' | tr -d "'" | jq -sRr @uri )"
[[ -z "$query" ]] && exit

html=$(curl -A "$UA" -Ss "https://rumble.com/search/all?q=$query")
[[ -z "$html" ]] && { printf '%s\n' "Error fetching results." ; exit 1; }

list=$(
for i in $(seq "$(printf '%s\n' "$html" | xmllint --html --xpath 'count((//li/article//a/div/img))' - 2>/dev/null)")
do
    title="$(printf '%s\n' "$html" | xmllint --html --xpath "string((//img[@class='video-item--img']) [$i]/@alt)" - 2>/dev/null )"
    url="$(printf '%s\n' "$html" | xmllint --html --xpath "string((//a[@class='video-item--a']) [$i]/@href)" - 2>/dev/null )"
    time="$(printf '%s\n' "$html" | xmllint --html --xpath "string((//time[@class='video-item--meta video-item--time']) [$i]/@datetime)" - 2>/dev/null )"
    printf '%s:\t %s\n' "$(printf '%s\n' "$time" | cut -c-10)" "$title" | tr -cd '.,?!`!@#$%^&*()"-=_+[]{}|:;<>a-zA-Z0-9 \n'
done)
[[ -z "$list" ]] && { printf '%s\n' "No results found." ; exit 1; }

_conv() { printf '%s\n' $(( "$1" + 1 )) ; }

for i in $( printf '%s\n' "$list" | fzf -m --bind 'enter:become(printf "%s\\n" {+n})') ; do
    url_part="$(printf '%s\n' "$html" | \
        xmllint --html --xpath "string((//a[@class='video-item--a']) ["$(_conv "$i")"]/@href)" - 2>/dev/null )"
    printf 'https://rumble.com%s\n' "$url_part"
done
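Since the script prints plain URLs, it chains nicely with yt-dlp. A hypothetical usage, assuming it’s saved as rumble-search somewhere in $PATH:

rumble-search 'linux on old macs' | xargs -r yt-dlp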
-
Chris Hedges: Lynching the Deplorables
“There is little that unites me with those who occupied the Capitol building on Jan. 6. Their vision for America, Christian nationalism, white supremacy, blind support for Trump and embrace of reactionary fact-free conspiracy theories leaves a very wide chasm between their beliefs and mine. But that does not mean I support the judicial lynching against many of those who participated in the Jan. 6 events, a lynching that is mandating years in pretrial detention and prison for misdemeanors. Once rights become privileges, none of us are safe.”
-
Seymour Hersh: How America Took Out The Nord Stream Pipeline
The New York Times called it a “mystery,” but the United States executed a covert sea operation that was kept secret – until now.
https://seymourhersh.substack.com/p/how-america-took-out-the-nord-stream
-
Notice
I have archived my pier and incorporated an affected opposition to Urbit as a core component of my self-marketing strategy for the remaining quarter of 2022.
-
Upgrading Urbit binary from 1.9 to 1.10
Upgrading Urbit on your Linux server has finally become ridiculously easy, with two quick caveats.
An upgrade for the Urbit binary, version 1.10, was released last week.
Version 1.9 was the most recent release before this upgrade, and it’s the one which we documented previously. I wish it were called 1.09, but I haven’t come out as Prince of the Earth yet and for reasons out of our immediate scope won’t be permitted to do so until Charles of Wales is crowned King.
Version 1.9 was notable in part for incorporating a new, built-in upgrade mechanism meant to simplify future updates, so this release marks the first time we get to try it out.
Update Instructions
Quick, clear, and simple instructions for upgrading via the new method have been very nicely documented by the Galactic Tribune. Have a look; it’s an extremely simple, three-step process.
Two one-time caveats to these instructions, however – specific to the update from 1.9 to 1.10 – merit additional documentation.
Pace
The new upgrade mechanism supports release channels (like “Stable” vs “Dev”) called paces, and the version we previously installed happens to have been released on the “wrong” pace.
Before running the commands from the instructions linked immediately above, simply open the file called “pace” inside the hidden subdirectory “.bin” inside your pier folder (the folder designating your planet name on your Linux system). In the pace file, replace the word “once” with “live”.
Put differently, you can execute this from the Linux command line as follows:
echo "live" > /path/to/your-planet/.bin/pace
Binary Location
The new upgrade process prepares your pier for the update and installs the new binary. It does not necessarily, however, replace the specific Linux binary file that you’re used to running. Once you figure this out, you shouldn’t have to deal with it again; the new upgrade process will be almost ridiculously simple and you shouldn’t need anything other than what the Galactic Tribune has provided.
If you followed verbatim the official documentation for your original installation as I had, you are likely used to running a binary file named “urbit” located in the same directory as your pier folder (i.e. parallel to the pier folder, not inside it).
If your upgrade was successful, that file is now outdated and can be deleted or backed up. This is true regardless of whether you ran the “next” command on the original “.run” binary as advised or on your original “urbit” binary; in either case, the original file is untouched.
As demonstrated by the Galactic Tribune, the “.run” file inside your pier folder is now your new, upgraded binary.
Moving forward, you can simply run Urbit directly from this binary file. In our case, however, we wanted a quick way to retain functionality of our existing environmental scripts. We opted to create a symbolic link named “urbit” in the location of the original binary, pointing to the updated one.
If you want to do the same, assuming your original urbit binary and pier folder were located next to each other inside an “urbit” folder in your home directory, you would execute the following:
ln -s ~/urbit/pier-name/.run ~/urbit/urbit
Don’t forget that relative targets in symbolic links are resolved relative to the link’s own directory, not your current one, so be sure to use absolute paths as I have above.
Conclusion
The new upgrade feature is impressively simple and efficient. If for any reason you need to use the old method, those instructions are included as well.
The pace problem should not come up again for most users. And if you created a symbolic link as we did, you will not need to create it again when the .run binary is upgraded in the future.
Live long and prosper.
-
Installing Urbit on Ubuntu 20.04 LTS with AWS Lightsail
The official guide for installing Urbit on a Linux server is accurate enough and up to date. At the same time, documentation on Urbit at this stage in its development easily becomes substantially outdated, and running any project like this will involve variation by some combination of preference and environmental necessity.
I recommend following the official guide, to which I defaulted throughout the process and from which I deviated only slightly, as follows. This is not intended as a replacement for or even a supplement to the official guide so much as a place of refuge in case someone runs into trouble.
Identity
Your unique identity within the Urbit namespace is basically a phonetic IP address and is called a planet (with stars and galaxies up the network hierarchy). If you don’t have any connections in the Urbit community, you can start by booting up a comet and then asking around for a planet, or you can just buy one. I’ve skimmed at least a dozen websites that sell them either in USD or ETH, and after recent cost reductions in the way these are generated it’s not uncommon to see them sold for $10–$30. This is a great way to support someone you like – or you can pay (currently) much less by buying a planet on Azimuth for 0.002 ETH, which for me five days ago amounted to less than $3 with gas fees.
Server Basics
I’ve had good luck with another VPS provider over the years, but for this project I opted for a vanilla Ubuntu 20.04 LTS server through AWS Lightsail for $5/month – mainly because the first 3 months are free.
Urbit these days is accessed both as a command-line shell called dojo and broadcast through a web interface called Landscape. The official documentation by default advises you to run sudo setcap 'cap_net_bind_service=+ep' ~/urbit/urbit so that Urbit can bind to the HTTP port 80 as the default web server, though I ended up reversing this via sudo setcap -r ~/urbit/urbit so that it runs on 8080 instead. I then applied the former command to my nginx binary instead and configured Nginx as a reverse proxy.
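Spelled out, and assuming your nginx binary lives at /usr/sbin/nginx (check with command -v nginx), that amounts to:

sudo setcap -r ~/urbit/urbit                             # remove the capability; Urbit serves on 8080
sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx   # allow Nginx to bind to ports 80/443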
Note that Urbit guides which point you in this direction tend to be older, and future versions of the urbit binary will likely require flags like --http-port for such configurations to work consistently.
I’m a hacker, and I learn things my own way; despite the fact that I’ve run various headless web servers, I’ve never really had to think about port access before and I got a better understanding of the basics while trying to diagnose what I initially suspected were improperly configured SSL certificates.
I eventually discovered AWS has their own proprietary network firewall, accessed through the web console and independent of the firewalls running inside the actual Linux system like ufw/iptables. HTTPS (port 443) is disabled by default.
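For what it’s worth, the same change can be made with the aws CLI instead of the web console (assuming the CLI is installed and configured; the instance name here is a placeholder):

aws lightsail open-instance-public-ports \
    --instance-name my-urbit-server \
    --port-info fromPort=443,toPort=443,protocol=TCP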
I suppose this might not have been the case had I opted for a VM that had a webserver pre-installed, but the choice was between installing things myself on Ubuntu or loading the RHEL-based “Amazon Linux,” about which I had at least a couple reservations.
For setting up HTTPS with LetsEncrypt, Amazon recommends you use the snap version on their servers rather than python3-certbot-nginx, that you do it while the web server’s systemd service is stopped, and that you then go in and incorporate the public keys into your server configuration manually. Note that Amazon’s exact instructions vary depending on which type of distribution package you have.
I don’t think this is necessary, but in my case I used their method to validate my certificates and copied over pieces from a configuration I had generated with python3-certbot-nginx previously.
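For reference, a minimal sketch of the snap-based flow (domains are placeholders; certbot’s standalone mode needs port 80, hence stopping Nginx first):

sudo systemctl stop nginx
sudo snap install --classic certbot
sudo certbot certonly --standalone -d urbit.yourdomain.org -d s3.yourdomain.org
sudo systemctl start nginx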
Media Management
To facilitate uploading media, Landscape is designed to integrate with a file service Amazon created called S3. The official guide walks you through the process of setting up an S3 “bucket,” wherein they recommend purchasing a package from DigitalOcean. Amazon of course offers the service as well, but their documentation suggests their API no longer validates V2 Signatures, which Urbit requires.
Fortunately, the Urbit team accurately outlines how to install and configure MinIO for a self-hosted solution. One thing they don’t spell out for you (unless I missed it) is that you’re creating a Docker container for MinIO that is saved and can be run after future reboots via docker start minio-urbit.
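If you ever need to recreate that container from scratch, the shape is roughly this (credentials and data path are placeholders; the Urbit guide has the authoritative flags):

docker run -d --name minio-urbit \
    -p 9000:9000 -p 9001:9001 \
    -v /home/myuser/minio-data:/data \
    -e MINIO_ROOT_USER=changeme -e MINIO_ROOT_PASSWORD=changeme-too \
    quay.io/minio/minio server /data --console-address ":9001"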
I deviated from their instructions only in that I used Nginx instead of Caddy. I made this decision while misdiagnosing the problems mentioned above, so I do not think it should be necessary.
Nginx Configuration
This site was very helpful in translating the recommended setup for Caddy into an identically functional Nginx configuration. The MinIO configuration involves the creation of two proxy servers, into which I integrated a third for Landscape.
One advantage to doing this with Caddy is that it would have automated the validation and installation of the SSL certificates. In the case of other web servers, you just have to set everything up for HTTP and use a version of Certbot that will update the config for you. In the configuration template below, Urbit runs on 8080 (HTTP), MinIO runs on 9000 and 9001, and Nginx receives all three and sends them out on 80 (HTTP) and 443 (HTTPS).
/etc/nginx/sites-enabled/main
server {
    server_name urbit.yourdomain.org;

    location / {
        proxy_set_header Host $host;
        proxy_set_header Connection '';
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:8080;
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
        proxy_redirect default;
        proxy_set_header Forwarded for=$remote_addr;
    }

    listen 80;      # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/yourpath/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/yourpath/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Redirect non-https traffic to https
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}
/etc/nginx/sites-enabled/bucket
server { # MinIO console
    server_name console.s3.yourdomain.org;
    ignore_invalid_headers off;
    client_max_body_size 0;
    proxy_buffering off;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_connect_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;
        proxy_pass http://localhost:9001;
    }

    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/yourpath/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/yourpath/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server { # MinIO server
    server_name s3.yourdomain.org;
    ignore_invalid_headers off;
    client_max_body_size 0;
    proxy_buffering off;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_connect_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;
        proxy_pass http://localhost:9000;
    }

    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/yourpath/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/yourpath/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server { # MinIO server
    server_name media.s3.yourdomain.org;
    ignore_invalid_headers off;
    client_max_body_size 0;
    proxy_buffering off;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_connect_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;
        proxy_pass http://localhost:9000;
    }

    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/yourpath/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/yourpath/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
Shell Scripts
In order to run Urbit as a background process but also be able to access its shell in the foreground as needed, I downloaded and compiled abduco, which is very easy to build and should have no awkward dependencies. This serves the same purpose as the tmux or screen sessions Urbit recommends, except that those two are terminal multiplexers while abduco is only a session manager.
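Building it is just a clone and make (assuming the usual build tools, e.g. the build-essential package, are present):

git clone https://github.com/martanne/abduco.git
cd abduco
make
sudo make install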
I then incorporated abduco into 3 shell scripts in my user’s ~/bin directory (I can’t recall if this is part of $PATH by default); their function should be clear from the comments below. In the words of Dr. Emmett L. Brown, “please excuse the crudity of this model, I didn’t have time to build it to scale.”
~/bin/urbit-boot
#!/bin/sh
# If urbit isn't running, launch an abduco session named "urbit", in which
# the command urbit is run, and for which Ctrl+Q is bound to hide the session.
pidof urbit && { printf '\n%s\n\n' "An instance of urbit is already running."; exit 1; }
printf '\n%s\n%s\n\n' \
    "Launching new detachable Urbit instance via abduco." \
    "Press Ctrl+Q to return to shell. Restore with urbit-restore."
abduco -e ^q -c urbit $HOME/urbit/urbit -p 34543 $HOME/urbit/sampel-palnet
~/bin/urbit-restore
#!/bin/sh
# Find the abduco process with the session name urbit and re-attach it to
# the current command-line. If urbit isn't running, return an error.
printf '\n%s\n%s\n\n' \
    "Restoring existing Urbit instance via abduco." \
    "Press Ctrl+Q to return to shell. Restore with urbit-restore."
abduco -e ^q -a urbit || {
    pidof urbit && {
        printf '\n%s\n\n' "Urbit is already running without an attachable session."
        exit 0 # must return successful exit code, 0; error means urbit needs to be run!
    }
    printf '\n%s\n\n' "Urbit is not running."
    exit 1
}
Finally, this third script really could just be a function:
~/bin/urbit
#!/bin/sh
# If urbit-restore returns an error, make sure urbit really isn't
# running, and then run urbit-boot.
urbit-restore || pidof urbit || urbit-boot
Creating a Systemd Service
Finally, a stable web server should be able to survive reboots without manual intervention. In other words, we need to automate the launching of the MinIO Docker container as well as the urbit server. There are different ways of doing this, but here’s what I did.
/opt/root-launches-users-urbit
#!/bin/sh
# Root automatically logs user in, runs usual start script as the
# user, then starts docker

# launch urbit for user as reattachable abduco session
runuser -l myuser -c "/home/myuser/bin/urbit-boot" >/dev/null 2>&1 &

# launch minio for connected media bucket
docker start minio-urbit
/etc/systemd/system/urbit-daemon.service
[Unit]
Description=root starts urbit and media bucket for user at system boot

[Service]
ExecStart=/opt/root-launches-users-urbit

[Install]
WantedBy=multi-user.target
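Then make the helper executable and enable the service so root launches everything at boot:

sudo chmod +x /opt/root-launches-users-urbit
sudo systemctl daemon-reload
sudo systemctl enable urbit-daemon.service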
-
Copy awesomewm notifications to the clipboard
function copy_naughty()
    -- Copy naughty notification(s) to clipboard in v4.3; API changes in
    -- later versions should simplify. Multiple notifications are collected
    -- into a table, then concatenated as one string sent to xclip.
    local cs -- "combined string"
    local output = {}
    for s in pairs(naughty.notifications) do
        for p in pairs(naughty.notifications[s]) do
            local ntfs = naughty.notifications[s][p]
            for i, notify in pairs(ntfs) do
                table.insert(output, notify.textbox.text)
            end
        end
    end
    if output[1] == nil then return nil end
    local lb = "\n"
    for i = 1, #output do
        if cs == nil then
            cs = output[i]
        else
            cs = cs .. lb .. output[i]
        end
    end
    io.popen('xclip -selection clipboard', 'w'):write(cs):close()
    naughty.notify({ position = "bottom_middle", timeout = 1, text = "Copied" })
end