Birch Street Computing

about me

John M is a Linux fan in Lowell, MA.

I work at a company writing software. I fool around with Free and Open Source software for fun & profit.

I was big into Last.fm and you can still see some of what I listen to there. I can also be found on github, more recently sourcehut, and (historically) bitbucket. I don't care for most popular social media sites. If I have an account on one of them, it's probably either old and unused or was created just for tinkering.

promo

Links to things I like, use, or otherwise feel are worth sharing. Things I'd like to see get more popular.

John's Blog & More

A perpetual work in progress. One that has been rarely touched as of late. This applies to both the content and the software to publish it.

Of some historical interest may be the Python Taglib Tutorial or my old Kubuntu Wireless page.

Samba in Kubernetes - Status Update 2

I had hoped to update the wider Samba community with another status report in December but I missed that boat. So January will have to do. This message is part of an ongoing effort to summarize what we've been up to as we work on integration for Samba in containers and Kubernetes 1.

As a reminder: our focus is enabling Samba based services running within Kubernetes clusters; however, our container work should be completely independent of the orchestration layer, so you can use docker, podman, or other OCI container based orchestration systems.

Clustering/CTDB

We have continued working on making clustered smbd instances with CTDB a viable option for users. The low-level work has not changed much recently; we've focused on improving the operator and how we create and manage clustered instances. The feature is still experimental, but the workflow should not change much in the near future. Largely, you just need to create "SmbShare" resources that indicate they should be clustered and specify the minimum size of the cluster. We've improved our testing coverage but need to improve our infrastructure before we can stabilize the feature. We also have some plans to revisit how we configure the CTDB cluster, as the nodes file is a bit of a challenge.
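
To make that concrete, here's a rough sketch of such a resource. The exact field names may drift while the feature is experimental, so treat the apiVersion and spec layout here as assumptions based on the current CRDs rather than a stable API:

kubectl apply -f - <<EOF
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
  name: clustered-share
spec:
  storage:
    pvc:
      name: clustered-share-pvc
  scaling:
    availabilityMode: clustered
    minClusterSize: 3
EOF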

Like I mentioned in my previous message, we want to look into improving behavior with regards to node and container failover. We have not been able to spend much time on this yet, so we are unclear if we can combine CTDB's native IP failover with Kubernetes networking.

We're nearly done adding support for the vfs fileid module to the operator. Sachin Prabhu has a PR open on this topic 2. This change ensures that file IDs on the file system we're targeting (cephfs) do not depend on external factors, such as the order in which the kernel mounted the file systems. For now, this is always enabled, but we can make it configurable in the future.
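
For the curious, enabling the module boils down to a couple of smb.conf parameters that the operator generates on our behalf. A minimal sketch (the algorithm choice here is an assumption for illustration, not necessarily the exact one we ship):

cat >> /etc/samba/smb.conf <<'EOF'
[global]
    # derive stable file IDs from the file system name,
    # independent of kernel mount order
    vfs objects = fileid
    fileid:algorithm = fsname
EOF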

ACL Xattr

We still want to run our containers without privileges, so being able to store NTACLs outside of "security.NTACL" continues to be a goal. To get this functionality, Günther Deschner is continuing work on the open Samba project merge request 3. Günther is working to improve the hooks into the VFS layer to address performance and layering concerns raised in that MR.

CI and Testing Infrastructure

Currently, all our projects rely entirely on the github actions CI. However, we've hit some limitations with this infrastructure, especially around running integration tests on multi-node clusters for clustered CTDB instances. Anoop C S has been working on arranging a new testing infrastructure using the CentOS CI 4. This system will allow us to run VMs in our tests and support virtual multi-node clusters. In addition to setting up this infrastructure for our Samba-in-Containers work, the plan is to also use it for the gluster/samba integration tests, and perhaps other samba integration tests in the future.

AD DC Containers

The samba-containers project generates client, server, and AD DC server images. However, the AD DC server images today produce containers that can only act as a single DC in a hard-coded domain with hard-coded users and groups. This has worked fine for our team for a while because our needs for Samba AD are limited: we use it as part of our integration tests and not much else. As part of a general effort to make the samba-containers project more broadly useful, I spent some time over the holidays making the AD DC container image work with custom settings 5. The new image will be based on the sambacc project, just like the file server image has been for a while. Soon, the image will be configurable and will support provisioning a new domain as well as joining a new DC to an existing domain.

Running an AD DC container continues to require executing the container with SYS_ADMIN capabilities.
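
If you'd like to poke at the current image anyway, a podman one-liner along these lines does the job (the image reference is illustrative; check the samba-containers repo for the real registry and tags):

# run the AD DC image with the capability it currently requires
podman run --rm -it --cap-add SYS_ADMIN quay.io/samba.org/samba-ad-server:latest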

Wrap Up

Work continues on many of the projects living under the samba-in-kubernetes umbrella. We're hoping that these (semi-)regular updates help create some additional interest in these efforts. Feel free to reply with questions/comments/concerns. We'd also love it if you dropped by our github projects; even feature requests are welcome. :-)

Thanks for reading!

PS. This is a reformatted version of what I sent to the samba mailing lists 6. I'm "blogging" these for easy reference and discoverability.

Samba in Kubernetes - Status Update 1

As some of you may remember, I've been working on an effort to include SMB, via Samba, in the container ecosystem 1 and Kubernetes. It's been a while since I wrote anything about these projects. I want to give an update on our work to keep people informed, and possibly get others interested in what we've been doing. I hope to make this a regular thing (unless you get annoyed and tell me to stop ;-) ).

We gave a presentation on these projects at sambaXP 2 and also have had a few posts on the samba mailing list about them. Since then we've been doing a lot of varied work and below is a small slice of it.

CTDB

I took on the task of trying to "containerize" CTDB-enabled Samba over this summer. With the assistance of the samba-technical mailing list, we successfully started running CTDB in a containerized environment. I proceeded to automate parts of the CTDB configuration in our sambacc library 3. Finally, we created new example YAML files in samba-containers 4 that demonstrate a clustered instance of Samba running under Kubernetes orchestration.

There were a lot of challenges getting CTDB working automatically in a container, and there's still a lot to do. For example, the code I wrote to help manage the CTDB nodes file isn't as robust as I'd like it to be and is still generally immature. We'd like to discuss ways we might update CTDB to make the nodes file easier to manage through automation (in our environment) and hope such discussions could improve things for other use cases as well.

At the moment, we've been using a Kubernetes based mechanism 5 for giving clients running outside of the cluster access to the shares. Without plunging deep into the particulars of Kubernetes networking: the method works well enough, but it does not use the CTDB public address mechanism. As such, we suspect the failover behavior we have right now may not get clients reconnected as quickly as CTDB's tickle-ack would. This is an area we're going to be actively investigating, and we would like to hear additional feedback on this topic.
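
Without committing to specifics here, the mechanism amounts to exposing the SMB port through a Kubernetes Service. A minimal sketch, assuming a LoadBalancer-capable environment (such as MetalLB on bare metal); the names and labels are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: smb-share
spec:
  type: LoadBalancer
  selector:
    app: samba
  ports:
    - name: smb
      port: 445
      protocol: TCP
EOF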

We're also adding support for the CTDB enabled Samba instances to the Kubernetes operator code. The plan is to continue to use the single SMB instance by default but add an experimental mode for running a StatefulSet of Pods that include CTDB for clustering across multiple nodes.

We are also discussing the need to use the fileid vfs module on top of the clustered file system we intend to use (cephfs). However, we don't want to require a PVC to be any particular file system, so we have been discussing ways to enable the fileid module, and configure it, only in the cases where we need it, in an easy-to-use manner.

ACL Xattr

We are very interested in being able to use NTACLs on the shares that we host in containerized Samba servers. Today, the acl_xattr module stores the NTACL metadata in an xattr named "security.NTACL". The "security" xattr namespace on Linux requires the CAP_SYS_ADMIN capability, basically the "root user" of Linux capabilities. This would require running the containers with escalated privileges, something we would prefer not to do. So, Günther Deschner, with feedback from Ralph Böhme, Michael Adam, and others, has been working on the patches in MR#1908 6.
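
A quick shell demonstration of the underlying problem, runnable on any local file system that supports xattrs:

touch testfile

# as an unprivileged user, the security.* namespace is off limits...
setfattr -n security.NTACL -v somevalue testfile
# setfattr: testfile: Operation not permitted

# ...while a user.* xattr needs no special capabilities
setfattr -n user.NTACL -v somevalue testfile
getfattr -n user.NTACL testfile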

With these changes in place, we'd be able to configure Samba with a custom xattr for our containerized Samba instances and thus avoid needing to run a privileged container while still supporting NTACLs.

Nightly Builds

We've been looking into an easy way to try out the latest code in Samba and are planning to add support for building container images based on nightly package builds. Anoop C S has already added support to the sambacc layer for the newer CLI options present in Samba 4.15. Next, we plan to add new build rules to the samba-containers project that will create new build tags in our container registry repo(s). Soon we should be able to more easily test and experiment with the latest versions Samba has to offer!

Metrics

In the slightly longer term, we would like to add metrics support to our pods running in Kubernetes. Our plan is to follow the de-facto standard in this space and export Prometheus metrics. To do this, we need to get the data out of our Samba stack; rather than trying to directly scrape the output of a tool like smbstatus, we are very interested in the effort to add JSON output support to the samba tools. In our case, we want JSON output from smbstatus, so we're very interested in MR#1269 7. We'll probably be getting more involved here in the near future.
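
Assuming the MR lands in roughly its current shape, the workflow we're imagining looks something like the following; the flag and the key name here are assumptions until the change is merged:

# dump machine-readable state and pull out the bits we care about
smbstatus --json | jq '.sessions'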

Offline Domain Join

One new feature that landed in Samba 4.15 is the new Offline Domain Join (ODJ) support. Currently, to connect our containers with Active Directory we're asking users to store a username/password in a Kubernetes secret 8. We are aware that storing a password (even within a secret) isn't the best thing to do, so we're looking forward to trying to integrate the ODJ feature into our projects so that users never have to store a password. Watch this space! :-)

Wrap Up

If you have any questions or comments feel free to ask! I'm hoping this update both serves to inform you of what we're up to as well as serving as a prompt for you to give us feedback.

Thanks for reading!

1. Specifically OCI/Docker style containers.
8. Not strictly true: you can leave out the secret and then run a command in the container to complete the join. That is an option, but it runs counter to the level of automation we'd like to provide.

Hello Operator

Recently, I have been working* on a new project. As the title indicates, this is an operator; specifically, an operator for managing Samba. If you are not familiar with operators, the idea is basically software that runs in Kubernetes and in turn manages Kubernetes resources, typically taking custom resource (CR) types as input. Samba is, of course, the well-known Open Source implementation of the SMB protocols and the software to serve, manage, and interoperate with SMB, Active Directory, and other Windows oriented things.

The project was started by Michael Adam, a Samba team member, and I started helping out around October last year. It was slow going at first, but we've picked up steam recently, and it is becoming one of my biggest areas of focus.

I hope to write more about this subject, so I am not planning to be very comprehensive in this post. I do want to share a few links and make a few remarks on what I have been doing in these projects.

First, we've got three code repos that make up most of the unique code and files:

The Samba Operator repo contains the code for the operator itself. This is a Go language operator. It defines Custom Resources via Kubernetes Custom Resource Definitions (CRDs) and implements the "business logic" of interpreting those resources and turning them into Pods and other things that produce shares for people to use. I want to come back to the subject of the design behind the CRs, and the recent work we've put into that area, in a subsequent post.

The Samba Container repo contains files that help build "generic" samba containers - basically OCI/Docker-style images containing various samba components. Currently, it produces images for a client (smbclient), a server (file-server: smbd, winbind, etc.), and work is in progress to add a Samba AD image. These are meant to work with the operator but are not specific to it. These can work locally with podman or docker or in any other compatible container system.

Because the file server has many interactions with the system it runs on, and there are various tools to configure aspects of both smbd and winbind, I started the sambacc project to try to create a layer that eases the user-facing steps needed to get the samba subsystems working in the container environment. It is a Python library and a command line tool. The CLI is somewhat unimaginatively called "samba-container". The sambacc package is meant to act as a "glue" layer between samba and the container environment. It also aims to provide a single, declarative, configuration that is (relatively) easy for both humans and code to work with.

I started sambacc late last year and, after a few weeks of intense development and some subsequent discussion with Michael Adam, we made the server image built by Samba Container based on sambacc. Currently, sambacc can configure samba (via smb.conf style parameters in the internal registry), set up users and groups (local to the running container), help bring up the smbd and winbind processes, and assist with AD domain join.
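
To give a flavor of that declarative configuration, here's roughly what a minimal sambacc JSON config looks like. I'm sketching the shape from memory, so consult the sambacc examples for the authoritative schema:

cat > config.json <<'EOF'
{
  "samba-container-config": "v0",
  "configs": {
    "demo": {
      "shares": ["share"],
      "globals": ["default"],
      "instance_name": "SAMBA"
    }
  },
  "shares": {
    "share": {
      "options": {"path": "/share", "read only": "no"}
    }
  },
  "globals": {
    "default": {
      "options": {"security": "user", "map to guest": "Bad User"}
    }
  },
  "users": {
    "all_entries": [
      {"name": "sambauser", "password": "samba"}
    ]
  }
}
EOF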

Development on sambacc itself has slowed down a bit in the past few weeks, as it is largely meeting my needs for enabling development on the operator itself. However, there's still lots and lots to do in all of these areas. I expect to do a bit of "see-sawing": certain areas of the code mature, then I move on to other topics and find a need that would be better served by changes or additions in sambacc rather than in the operator code itself. For example, I expect to soon add subcommands to samba-container to help create liveness and readiness checks for Kubernetes pods.

I'll close this out by suggesting that anyone interested in these topics join the conversation by making use of the github repos: post issues, use the discussions feature, or whatever floats your boat. :-) I hope this was a useful summary of the three main areas that make up our recent efforts, with a little emphasis on sambacc, because it's very new, probably a bit less self-explanatory, and a project I started. :-)

*Standard disclaimer that while this is something I work on as part of $DAYJOB all the opinions here are my own and so forth.

Some Books of 2020

I wanted to keep a bit of a record of some books I read in 2020. This isn't a complete list; it only covers books I completed, and only first reads. I didn't keep track over the course of the year, I just looked over what's on the shelf and picked out the ones I clearly remember completing.

This isn't a brag list or anything, but I am happy with the fact that I read many more long-form books this year than in the past few.

  • A Think Tank for Liberty - Poole

  • Bad Blood - Carreyrou

  • Bottleneckers - Mellor, Carpenter

  • Grant - Chernow

  • How Innovation Works - Ridley

  • How to Have Impossible Conversations - Boghossian, Lindsay

  • John Goblikon's Guide to Living Your Best Life - Goblikon, Dermer, Rispoli

  • Lord of Emperors - Kay

  • Lost in Math - Hossenfelder

  • Seeing Like a State - Scott

  • Skin in the Game - Taleb

  • The Book of Legendary Lands - Eco

  • The Founder's Mentality - Zook, Allen

  • The Guarded Gate - Okrent

  • The Infidel and the Professor - Rasmussen

  • The Storm Before The Storm - Duncan

  • Thinking as a Science - Hazlitt

  • What If? / How To / Thing Explainer - Munroe

Disabling systemd-resolved on Fedora CoreOS 33

I was recently working on a side-task for $DAYJOB and decided, rightly or wrongly, that I needed to run a container image that hosts a DNS server, among other components, on an FCOS (Fedora CoreOS) VM image. I chose to use FCOS 33 because Fedora 33 had recently been released and I wanted my stuff to be a bit more forward-looking than usual.

However, I ran into difficulties because systemd-resolved was listening on port 53. To turn off systemd-resolved completely, I needed to do a few things that I've captured in the FCC sample below. If you want to do something similar, you can add the relevant parts of this example FCC to yours. Beneath the sample, I touch on what the various subsections accomplish.

variant: fcos
version: 1.1.0
# (skipping passwd section)
storage:
  files:
    # Set network manager to use default dns
    - path: /etc/NetworkManager/NetworkManager.conf
      overwrite: true
      contents:
        inline: |
          [main]
          dns=default

          [logging]
      mode: 0644
      user:
        id: 0
      group:
        id: 0
    # Ensure resolv.conf is a real file, not a symlink
    - path: /etc/resolv.conf
      overwrite: true
      contents:
        inline: ""
      mode: 0644
      user:
        id: 0
      group:
        id: 0
systemd:
  units:
    - name: coreos-migrate-to-systemd-resolved.service
      enabled: false
      mask: true
    - name: systemd-resolved.service
      enabled: false
      mask: true

The simplest steps involve stopping systemd-resolved and a "helper" migration service that is FCOS specific. That got me to the point where port 53 was no longer being used by anything else.

After that, it was a matter of getting "normal" DNS working again. Configuring Network Manager should have been enough according to the Network Manager docs I read, but because /etc/resolv.conf was already a symlink, it was apparently unable to write the typical contents to the file and was silently (?) failing. To work around this, I determined that adding an empty file to the storage/files section of the FCC was sufficient to get Network Manager writing resolv.conf in the ye-olde way, and I could move on with my life.
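
After booting a node provisioned this way, a quick sanity check confirms that port 53 is free and resolv.conf is a plain file again:

# nothing should be listening on port 53 any more
sudo ss -lntu 'sport = :53'

# resolv.conf should now be a regular file managed by Network Manager
ls -l /etc/resolv.conf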

Perhaps there are better ways to handle this situation than turning off systemd-resolved to get the port free. But this is the route I chose. Feel free to shoot me an email if you think there's a better approach.

How I work on go-ceph

Or, how I go-ceph myself.

This write-up is something I promised a coworker, and I thought I might as well make it public. I didn't make it part of go-ceph's documentation because I feel it is a little too specific to how I work. I do hope that anyone reading this can make use of parts of it, be inspired by it, or perhaps just laugh at it. I just don't want to put it in go-ceph and make it sound like this is the way you are supposed to work on go-ceph.

First off, I tend toward a somewhat minimal setup. For most projects I use vim with very few plugins and mostly just some vimscript stuff I hacked together myself. Vim is my editor and bash is my IDE. :-) So I don't do much more fancy than using ctags and grep for code navigation.

On the average go-ceph PR we're adding one or a small number of function calls to go-ceph. I will often open my local copy of the ceph library headers for reference. We have a project standard of copying the C function declaration under an "Implements:" line in our doc comments, so keeping the file open makes it easy to copy that over too.

The more interesting parts are the build and test workflow. The repo includes a Makefile that can build and run test containers. These containers are used by our CI but are pretty easy to run locally. The makefile will automatically select between the podman and docker CLI commands; I prefer podman, of course. The make ci-image command will create a new container image, based on a ceph-containers image. You can choose the version of ceph by setting CEPH_VERSION to something like "nautilus" or "octopus". I made it possible to use different images in parallel; this didn't matter in our CI but is helpful when running locally.
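
For example, building images for two different ceph versions side by side:

make CEPH_VERSION=nautilus ci-image
make CEPH_VERSION=octopus ci-image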

Now, if you want to run the entire test suite, you can run make test-container to run all the test suites for rados, rbd, and cephfs, as well as our internal helper packages. It also starts a minimal ceph cluster before executing the tests. This is convenient because you can do essentially exactly what the CI does, locally, but it's a bit slow if you're iterating on a certain subset of things.

I've adapted my workflow to run a customized version of the command used by make test-container. This is something I started doing by hand, then turned into a simple shell script, and eventually into a more complex tool in Python; that's a pretty normal progression for me. I won't share the script here because it's pretty me-specific, but I will talk a bit about how it works. Effectively, it just runs a tweaked version of the docker/podman command seen in the makefile, with a few volume mount options (-v) that I want. In particular, I mount the results directory, which includes the coverage report and some of the ceph cluster config. An example follows:

podman run --device /dev/fuse \
  --cap-add SYS_ADMIN \
  --cap-add SYS_PTRACE \
  --security-opt=label=disable \
  --rm \
  -it \
  --network=host \
  --name=go-ceph-ci \
  -v /home/jmulliga/devel/go-ceph:/go/src/github.com/ceph/go-ceph \
  -v /home/jmulliga/devel/go-ceph/_results/ceph:/tmp/ceph \
  -v /home/jmulliga/devel/go-ceph/_results:/results \
  -e GOPROXY=http://localhost:8081 \
  go-ceph-ci:octopus

The coverage report is useful because I like to view the html version to make sure all the non-error conditions are tested, and all of the testable error conditions too. I'll go into how I make use of the ceph config shortly.

The container has an entrypoint.sh script that takes a number of useful options. Currently, the majority of them control which tests get run. I won't go over every single one; the script has a working (last time I checked) --help option. I call my wrapper script with additional args that are passed on to the container, such as --test-pkg=cephfs. This causes the entrypoint.sh script to only run tests for the cephfs subpackage. If my work only touches cephfs code, this makes the overall job faster by testing only the relevant package. There's also the --test-run=VALUE option, which is passed along to the go test command's -run option and reduces the run to a specific subset of tests. For the vast majority of cases I use --test-pkg, with a fair portion also using --test-run. I do generally run at least once without --test-run before pushing the code to create a PR. That's also often the step where I double-check the coverage report, eyeball the coverage percentages, and skim over my new or changed functions.

Despite the above working for many cases, let's say 90%, there are times when I want to go even faster, or I need to debug things and using the container to run the tests is more hassle than it's worth. In these cases I run the container in the background with the options --test-run=NONE --pause. This causes the tiny ceph cluster to be set up, skips running the tests, and then the script just sleeps forever. Once I have my little ceph environment going, I can start testing stuff against this cluster from outside the container. This is why I have the ceph config dir in a shared directory: I can set the environment variable in my shell with export CEPH_CONF=$PWD/_results/ceph/ceph.conf and then run the tests that make use of the ceph cluster using the standard go tooling, such as go test -v ./rados. Now I don't need to wait for ceph to start every time I want to execute my test(s).
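
Putting that together, the background-cluster workflow looks like this, reusing the volume and capability flags from the podman command above:

# start the container in the background: it sets up the mini ceph
# cluster, skips the tests (--test-run=NONE), then sleeps (--pause)
podman run -d --device /dev/fuse \
  --cap-add SYS_ADMIN --cap-add SYS_PTRACE \
  --security-opt=label=disable \
  --network=host --name=go-ceph-ci \
  -v $HOME/devel/go-ceph:/go/src/github.com/ceph/go-ceph \
  -v $HOME/devel/go-ceph/_results/ceph:/tmp/ceph \
  -v $HOME/devel/go-ceph/_results:/results \
  go-ceph-ci:octopus --test-run=NONE --pause

# point the tests at the shared cluster config and iterate freely
export CEPH_CONF=$PWD/_results/ceph/ceph.conf
go test -v ./rados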

One word of warning if you do want to try this at home: not all our tests are clean and properly idempotent. I certainly want that to be true, but there are times when I hit a condition that leaves state behind on the cluster, which might interfere with later test runs. Caveat emptor, patches welcome, and all that. :-)

This is getting longer than I expected it would, so I want to wrap up, but I will mention one more thing I've found handy lately. Sometimes, when I can't figure out what's going on with something in the Ceph code itself, say with an unexpected test failure, I want to enable debug logging in the ceph code. For this, I usually combine the above technique of starting Ceph in the container with editing the shared ceph.conf file. To the [global] section I'll add stuff like:

log to stderr = true
err to stderr = true
debug client = 20
debug mgrc = 20

I refer to the Logging and Debugging page in the ceph docs to help me pick what subsystems to enable debug for.

This has turned into a bit of a long-winded but general overview of a few techniques I use when working on the go-ceph project. I hope it was interesting.

Nine Years

Well, it has been (approximately) nine years since I last added content to this site. I recently spiffied it up a little bit and removed some of the more unfashionable elements of the site design. I left most of it the same however, as the longer I looked at it the happier I was with it. It's certainly old-fashioned, in the mid-2000s sense of old fashioned, but it's my kind of old-fashioned.

The homegrown static site generator I use for it has been improved as well. Most of the time spent on it was just getting it working on Python 3. Very little else about it needed to change, because it does rather little.

I certainly do not promise much additional activity here, but I have been hoping to get my thoughts down in writing more often. I spend a lot more time communicating online, rather than in person, than I once did, and that motivates me to try to keep some of my more common thoughts in one place that I can refer to later. This site could be that place.

I thought I might amuse myself a bit by creating a list of things I would have written about in the intervening years. However, this list would not make any sense to any poor person who stumbles across this site and so I will refrain from making that list part of a post. I might create some sort of list-of-ideas but I'll keep it private.

My 2011 in Music

For me, 2011 was a pretty darn good year for music. I regularly listened to a lot more new music than last year. I also sampled more stuff after reading a few more music blogs this year. I especially appreciated learning about bandcamp.com and listening to albums from that site. I figure it influenced a couple of real purchases this year.

Biggest 2010 Albums of 2011

These are albums/artists that I listened to a lot in 2011, even though they weren't new this year. Some of them were new to me and others simply kept me coming back to them.

  • Borknagar - Universal: I still listen to this album all the time. Simply fantastic.

  • Enslaved - Axioma Ethica Odini: This is also an insanely good album. Even at work I've named some of my test data after the track names here.

  • Ghost - Opus Eponymous: If evil can be fun and poppy, this album is exhibit A. Every time I see someone diss this album online I want to shake them and tell them to stop overthinking it and just enjoy.

  • Julie Christmas - The Bad Wife: I simply love her voice. I hope Made out of Babies makes another album soon, but this solo album will tide me over in the meantime.

  • Melechesh - The Epigenesis: I simply love the middle eastern + metal sound these guys put together. I've used it to wake myself up on quite a few morning drives.

  • Shining - Blackjazz: This album is nuts, in the best way possible.

Standout New Music of 2011

The Good Ones

These albums were pretty good and I listened to them quite a bit.

  • The Sword - Warp Riders: Loved the title track. I preferred what these guys were doing on the first two albums, but it's still a very enjoyable journey through space.

  • Novembers Doom - Aphotic: Another solid album from this band.

  • Septicflesh - The Great Mass: Got this one pretty late in the year for it to really sink in, but it's kind of been a grower. The orchestral bits are worked in well with the metal side of things. I want to listen to some of these guys' older releases now.

  • Anaal Nathrakh - Passion: Got this one at the same time as Septicflesh. I had it playing when I was stuck in traffic one day and I think it made me crazy (more crazy?). Now sometimes I listen to it when I code in order to feel more evil. :-D

The Great Ones

These albums were really awesome and I listened to these albums a ton. They got the most use in my car so my last.fm stats really don't represent how often I listened to these.

  • ICS Vortex - Storm Seeker: I've really grown to like Vortex's vocals after listening to Borknagar so many times. I listened to the opening track, The Blackmobile, and knew I had to buy the album. While the remaining tracks differ quite a bit from Blackmobile, I still think it's great. The whole album is good progressive metal, with what sounds to me like a Yes-like feel.

  • Obscura - Omnivium: Kicked my ass. This is simply an amazing album. There were a few things I liked more on the previous album, mainly the vocals, but it's well worth a listen. The cover is an illustration from a book on the mating process of Metroids.

  • Kvelertak - Kvelertak: I'm cheating a little here because only the US release was in 2011. This album is hugely energetic and I liked to listen to it on the treadmill!

  • Woods of Ypres - Woods IV (The Green Album): Can being sad be fun? That's how I feel about this music. Quite often I'd find myself singing or shouting along with the lyrics in the car. What's really sad is the recent death of the band's frontman (RIP).

  • Red Fang - Murder the Mountains: This band is hilarious, awesome, and makes fantastic music videos. Of course, the album itself rocks too. My favorite track is Throw Up, which sounds pretty Melvins-ish to me. I expect this band to get more popular soon.

My penultimate album of 2011

Mastodon/The Hunter Album Cover

Mastodon - The Hunter: Mastodon did a great job building up anticipation for this album, in my opinion. They released full song previews on youtube before the official release date. Like some others, I was unsure about some of the differences between what they're doing here versus Crack the Skye. But really, the album's a grower; I mean it's a huge grower. The first time I listened to 'Creature Lives' I wasn't sure about it. But now, and even more after attending the live show, I love it. I sing along to it. My favorite track is Spectrelight, but the whole last run of tracks on the album is just tremendous.

My favorite album of 2011

Subrosa/No Help for the Mighty Ones Album Cover

Subrosa - No Help for the Mighty Ones: If you're wondering what album I could have liked more than The Hunter (maybe you're not), wait no longer. No Help for the Mighty Ones is a deep, beautiful, and haunting album. I haven't read or watched some of the sources they used as background material, but I'm sure doing so could only help me appreciate it more. It's simply so heavy and sad that it wraps around to wonderful. I'm not much of a reviewer, and that's not what I was planning on doing here; suffice it to say it's a great piece of art. My personal favorite track is Whippoorwill, but really, there's nothing bad here.

Recent Acquisitions

I got some CDs (real, physical CDs, yay!) this Christmas from my awesome family. I also got some Amazon points that I've already used for a few mp3 albums. These are albums that I haven't had a chance to listen to much yet, but I expect I'll be queuing them up pretty often.

  • Absu - Abzu: Love the opening scream. I've already listened to this a couple of times from the bandcamp page pre-purchase. I've got to check out the previous album, Absu (oh-you-guys-are-clever), now.

  • Cave In - White Silence: I was a big fan of Zozobra when these guys were apart. So far I really like what's happening on this album as well.

  • Glorior Belli - The Great Southern Darkness: The title track is seriously badass.

  • Revocation - Chaos of Forms: Love the previous album and expect this one to rock as well.

  • Unexpect - Fables of the Sleepless Empire: I've been waiting for this album for what feels like forever. I'm simply giddy.

  • Yob - Atma: Doom. Doom. DOOM!

Please Release

One nice thing about using Python is the simple and easy way to get libraries and other packages using tools like pypi and pip. However, when deploying, I prefer OS packages (typically RPMs, since I've worked at RHEL/Fedora-using shops). Sometimes the libraries I need are not packaged by upstream, and I end up building them myself.

None of that bugs me. What I wish is that more OSS projects, especially some of the smaller libs, would actually release occasionally. Instead, I have to get a copy out of source control, which often has no release information. This makes me queasy because I have a hard time telling whether the most recent versions are stable. Even if you don't want to make tarballs, at least periodically tag your releases in a way that gives us downstream users some confidence that your code isn't halfway through a refactoring or something.
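
Tagging costs almost nothing, for example:

# an annotated tag marks a known-good point in history
git tag -a v1.0.2 -m "release 1.0.2"
git push origin v1.0.2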

If it's a "mature" code base, one that changes rarely, stick a release tag on there once a year or something. I'm going to end up testing this stuff for my own purposes anyway, but a release tag at least makes me feel that I'm not going to waste too much of my time on software left in the lurch. So far, some of the worst offenders are on dev sites like bitbucket and github. I don't know if my sample size is too small or I just have bad luck, but my guess is that in ye olden days people had to create a tarball just to get hold of the code, while these newer sites/tools make it easy to skip that step, and then forget completely about doing a release at all.

I don't actually think anyone is going to listen to me, and I'm going to have to keep building packages that have strings like "20101114git83848383f3892" or "20110109hgf110ac096f19" in them. But hope springs eternal :-).

Dell Vostro 3300 and Fedora 13

I recently got myself a Dell Vostro 3300 and am in the process of installing Linux, Fedora 13 to be specific, on it. I started out using the KDE Spin LiveCD and resizing the Windows 7 partition down to about a third of its original size. This was a little tricky because the Fedora automated installer saw the partition and recognized it as NTFS, but reported its size as 0 bytes. This prevented me from running the resize tool in the Fedora installer, so I did it via the command line. I used ntfsresize(8) to shrink the filesystem, and then used manual partitioning in the Fedora installer to shrink the partition. I gave myself some wiggle room by making the partition a tad bigger than the size I shrunk the filesystem to.
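
For anyone in the same spot, the command line steps were roughly as follows; the device name and sizes are illustrative, not exactly what I typed:

# dry run first: verify the filesystem can shrink to the target size
ntfsresize --no-action --size 60G /dev/sda2

# then actually shrink it; pick a size with some slack below the
# new partition size you'll set in the installer
ntfsresize --size 60G /dev/sda2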

A lot of the hardware worked right out of the box; wireless networking was the big exception. I knew beforehand that the machine would come with Broadcom wireless, so this was expected. After plugging into the wired LAN and running the update tool, I added the rpmfusion repos and, thanks to the hints in a helpful blog post, knew to install the kmod-wl package.
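
In short, once the rpmfusion repos were in place, it was a one-liner:

# the broadcom wireless driver lives in rpmfusion's nonfree repo
su -c 'yum install kmod-wl'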

There is still more stuff to try out and I plan on writing about this machine a little bit in future entries.

Every blog page or article on this site is available under the CC-BY-SA license unless otherwise noted.