Saving my Wrists, Part 1: moving to an ergonomic mouse

I’m fully aware of my bad desk-work ergonomics. I’m pretty good about sitting posture – back straight, lumbar pillow, lap sloping forward, using my standing desk whenever the cat steals my chair, etc etc etc – but I use a normal gamer keyboard and mouse in a cramped position.

Late last year I started getting a bit of a twinge in my wrist, which reminded me of the time I properly flirted with carpal tunnel at my first job, doing an intensive coding project on an ultra-small laptop. And with the upcoming 2024 refresh of the company’s wellness expense budget, I decided to start looking into more ergonomic pointing device options.

My first avenue of exploration was trackballs, but despite the existence of the GameBall, there is realistically no way to use a trackball for both office work and gaming. That was a core requirement for me – I didn’t just want a device sold by an ergonomic office company to keep a happy little worker bee productive, but something which would minimize compromises for long sessions of PC gaming too. And *gaming* ergonomic input devices are a rarity.

What distinguishes a mouse as for-gaming vs not-for-gaming? It’s not just aesthetics like rainbow-puke lighting (though that’s not NOT a component), but also increased polling rate and DPI sensitivity. As a random example, the highest-end Logitech office mouse – the $99 MX Master 3S – has a 125 Hz polling rate and 8,000 DPI. Logitech Gaming’s cheapest mouse – the $29 G203 – has a 1,000 Hz polling rate, and the $59 G305 gives you a 12,000 DPI sensor. High-end devices are over 25,000 DPI! And whilst I don’t actually need an insanely high DPI, 125 Hz is entirely unfit for use (it’s lower than my screen’s 144 Hz refresh rate, so I can absolutely feel the mouse jerking around the place).
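
To put numbers on the polling-rate complaint (the only given here is my monitor’s 144 Hz refresh rate – the rest is arithmetic):

    # How many fresh position reports each displayed frame gets at 144 Hz:
    awk 'BEGIN {
        printf "125 Hz polling:  %.2f reports per frame (one report every 8 ms)\n", 125/144
        printf "1000 Hz polling: %.2f reports per frame (one report every 1 ms)\n", 1000/144
    }'

At 125 Hz, plenty of frames get drawn with a stale cursor position – that’s the jerkiness – whereas at 1,000 Hz every frame has several fresh reports to work with.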

I ended up with something I found on Amazon: the Delux Seeker M618XSD (from dodgy Chinese brand Delux), for $70. It’s been an interesting learning experience – not wholly positive, but not bad enough for me to abandon the exercise. Some of the issues I very much expected going in – for example, the software is garbage, the battery life is not as good as the Logitech G903 I was previously using, and the battery life meter on the OLED display is all over the place (100%->84% takes less than an hour, but 84%->79% takes more than a day).

Delux Seeker mouse, right hand (button) side

But it’s been a fun learning experience regardless! The first decision was the wrist rest. The Seeker comes with a magnetically attachable wrist rest, which I tried for a day or two, but eventually had to write off. The core problem is that with the wrist rest attached, my grip position is far forward on the mouse, with the back of it pressed fully into my palm, and all accuracy in movement coming from my whole forearm – swinging it left and right for horizontal motion, and sliding my arm along the armrest for vertical motion. It’s sorta okay, if a bit tiring, but it entirely falls down when it comes to repositioning the mouse – if I hit the end of my mouse mat, I can’t just pick it up to move it, as the shape of the mouse with the wrist rest attached simply doesn’t allow for picking it up without squeezing at least one mouse button. And my accuracy when swinging my whole arm around is very, very low compared to a traditional mouse.

Hand fully pushed forward on wrist rest

Instead, without the wrist rest, my whole position changes – my little finger hooks under the right edge where the wrist rest would attach (making it easier to pick up), my hand rests on the desk about an inch further back from the mouse, and a lot of the fine movement comes from my fingertips nudging the mouse rather than from my arm. Sadly, without the wrist rest the horizontal scroll wheel is realistically out of reach (although I had configured it as a duplicate vertical scroll wheel, because vertical scrolling was uncomfortable with the wrist rest attached – so the end result is kinda neutral).

Hand without wrist rest, resting on desk and using fingertips at more of a distance

I ended up on a 1,000 Hz, 2,400 DPI configuration as the “sweet spot” without the wrist rest, with my fingers pointing down slightly (index finger resting on the mouse wheel when I’m not actively clicking things, middle finger on the right button, and ring finger alongside the little finger under the wrist rest attachment lip).

The next thing I tried was some low-friction “mouse skates”, as I found it took an unpleasant level of force to get the mouse moving. This was a mistake – the end result was too low-friction! I would end up accidentally dragging slightly whilst attempting to click, as the mouse’s friction was so low I could move it by sneezing at it. In the end I fixed the friction issue another way: I figured out I’m a huge idiot, and that the mouse doesn’t actually come with white feet – it comes with a white protective film over the feet, which I finally peeled off. Oops! The default feet are neither too low-friction nor too high-friction, so accidental dragging is no longer a problem, whilst it’s not too much effort to get the mouse moving from a standstill.

Oh, and the garbage software I mentioned? Part of that is the lighting basically doesn’t work. The docs say the mouse has palm sensing and should turn the lighting off while you’re using it, but that isn’t the case – the light is either on all the time (which needs manually turning on every boot with a button combo) or off all the time-ish (it’ll sometimes just blink randomly once in a while). And the software which is supposed to configure the lighting doesn’t seem to do much (it DOES work for changing the polling rate and the button assignments though, so it’s not totally useless).

I tried doing some aim testing with software like Aimlabs to see if I could measure my accuracy (and improvements), but frankly found the software too hard to drive to get meaningful results. But it seems to be doing okay for general use in games like Satisfactory or Baldur’s Gate 3. I don’t really play competitive shooters (at most PvE games like Borderlands with my wife) so not a huge deal.

I don’t think I’m necessarily “done” with finding the perfect ergonomic mouse for gaming (I’m curious about the Titanwolf Megalith), but I’m feeling pretty good about the experiment! Next step: keyboard changes.

P.S. A shower thought: what if an ergo mouse combined mousiness with a small thumb trackball, so you could increase accuracy by using two inputs simultaneously? Popular Nintendo shooter Splatoon does this, with gyroscope and joystick aim combined (one for big movements, and one for dialling in the last few pixels of aim).

Building a NAS

The status quo

Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).

QNAP TS-453mini product photo

That thing has been in service for about 8 years now, and it’s been… a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP’s OS was not up to the same standard as Synology’s – perhaps best exemplified by “HappyGet 2”, the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless – but a bad omen for overall software quality.

Retirement

Apparently it’s nearly four years since I last posted to my blog. Which is, to a degree, the point here. My time, and priorities, have changed over the years. And this led me to the decision that my available time and priorities in 2023 aren’t compatible with being a Debian or Ubuntu developer, and realistically, haven’t been for years. As of earlier this month, I quit as a Debian Developer and Ubuntu MOTU.

I think a lot of my blogging energy got absorbed by social media over the last decade, but with the collapse of Twitter and Reddit due to mismanagement, I’m trying to allocate more time for blog-based things instead. I may write up some of the things I’ve achieved at work (.NET 8 is now snapped for release Soon). I might even blog about work-adjacent controversial topics, like my changed feelings about the entire concept of distribution packages. But there’s time for that later. Maybe.

I’ll keep tagging vaguely FOSS related topics with the Debian and Ubuntu tags, which cause them to be aggregated in the Planet Debian/Ubuntu feeds (RSS, remember that from the before times?!) until an admin on those sites gets annoyed at the off-topic posting of an emeritus dev and deletes them.

But that’s where we are. Rather than ignore my distro obligations, I’ve admitted that I just don’t have the energy any more. Let someone less perpetually exhausted than me take over. And if they don’t, maybe that’s OK too.

My name is Jo and this is home now

After just over three years, my family and I are now Lawful Permanent Residents (Green Card holders) of the United States of America. It’s been a long journey.

Acknowledgements

Before anything else, I want to credit those who made it possible to reach this point. My then-manager Duncan Mak, his manager Miguel de Icaza. Amy and Alex from work for bailing us out of a pickle. Microsoft’s immigration office/HR. Gigi, the “destination services consultant” from DwellWorks. The immigration attorneys at Fragomen LLP. Lynn Community Health Center. And my family, for their unwavering support.

The kick-off

It all began in July 2016. With support from my management chain, I went through the process of applying for an L-1 intracompany transferee visa – a 3–5 year dual-intent visa, essentially a time-limited secondment from Microsoft UK to Microsoft Corp. After a lot of paperwork and an in-person interview at the US consulate in London, we were finally granted the visa (and L-2 dependent visas for the family) in April 2017. We arranged the actual move for July 2017, giving us a short window to wind up our affairs in the UK as much as possible, and to see out most of my eldest child’s school year.

We sold the house, sold the car, gave to family all the electronics which wouldn’t work in the US (even with a transformer), and stashed a few more goodies in my parents’ attic. Microsoft arranged for movers to come and pack up our lives; they arranged a car for us for the final week; and a hotel for the final week too (we rejected the initial golf-spa-resort they offered and opted for a budget hotel chain in our home town, to keep sending our eldest to school with minimal disruption). And on the final day we set off at the crack of dawn to Heathrow Airport, to fly to Boston, Massachusetts, and try for a new life in the USA.

Finding our footing

I cannot complain about the provisions made by Microsoft – although the move was not without snags. The 3.5 hours we spent in Logan airport waiting at immigration, due to some computer problem on the day, did not help us relax. Neither did the cat arriving at our company-arranged temporary condo before we did (with no food, or litter, or anything). Nor did the satnav provided with the company-arranged hire car refusing to work – and when I tried using my phone to navigate instead, it shot under the passenger seat the first time I had to brake, leading to a fraught commute from Logan to Third St, Cambridge.

Nevertheless, the liquor store under our condo building, and my co-workers Amy and Alex dropping off an emergency run of cat essentials, helped calm things down. We managed a good first night’s exhausted sleep, and started the following day with pancakes and syrup at a place called The Friendly Toast.

With the support of Gigi, a consultant hired to help us with early-relocation basics like social security and bank accounts, we eventually made our way to our own rental in Melrose (a small suburb north of Boston, a shortish walk from the MBTA Orange Line); with our own car (once the money from selling our house in the UK finally arrived); with my eldest enrolled in a local school. Aiming for normality.

The process

Fairly soon after settling in to office life, the emails from Microsoft Immigration started, kicking off the process of applying for permanent residency. We were acutely aware of the clock ticking on the three-year visas – and we had already burned three months prior to the move. Work permits; permission to leave and re-enter; Department of Labor certification. Papers, forms, papers, forms. Swearing that none of us have ever recruited child soldiers, or engaged in sex trafficking.

Tick tock.

Months at a time without hearing anything from USCIS.

Tick tock.

Work permits for all, but big delays listed on the monthly USCIS visa bulletin.

Tick tock.

We got to August 2019, and I started to really worry about the next deadline – our eldest’s passport expiring, along with the initial visas a couple of weeks later.

Tick tock.

Then my wife had a smart idea for plan B, something better than the burned-out Mad Max dystopia waiting for us back in the UK: Microsoft had just opened a big .NET development office in Prague, so maybe I could make a business justification for relocation to the Czech Republic?

I start teaching myself Czech.

Duolingo screenshot, Czech language, “can you see my goose”

Tick tock.

Then, a month later, out of the blue, a notice from USCIS: our Adjustment of Status interviews (in theory the final piece before being granted green cards) were scheduled for less than a month later. Suddenly we went from too much time to too little.

Ti-…

October

The problem with the one month of notice was that we had one crucial piece of paperwork missing – for each of us, an I-693 medical exam issued by a USCIS-certified civil surgeon. I started calling around, and got a response from an immigration clinic in Lynn, with a date in mid-October. They also gave us a rough indication of the medical exams and extra vaccinations required for the I-693, which we were told to source via our normal doctors (where they would be billable to insurance, if not free entirely). Any costs at the immigration clinic couldn’t go via insurance or an HSA, because they’re officially immigration paperwork, not medical paperwork. The total cost ended up being over a grand.

More calling around. We got scheduled for various shots and tests, and went to our medical appointment with everything sorted.

Except…

Turns out the TB tests the kids had were no longer recognised by USCIS. And all four of us had vaccination record gaps. So not only unexpected jabs after we promised them it was all over – unexpected bloodletting too. And a follow-up appointment for results and final paperwork, only 2 days prior to the AOS interview.

By this point, I’m something of a wreck. The whole middle of October has been a barrage of non-stop, short-term, absolutely critical appointments.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

Wednesday: I can’t eat, I can’t sleep, and I’m suffering various other physiological issues. The AOS interview is the next day. I’m as prepared as I can be, but still more terrified than I have ever been.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

I was never this worried about going through a comparable process when applying for the visa, because the worst case there was the status quo. Here the worst case is having to restart our green card process, with too little time to reapply before the visas expire. Having wasted two years of my family’s comfort with nothing to show for it. The year it took my son to settle again at school. All of it riding on one meeting.

Thursday

Our AOS interviews are perfectly timed to coincide with lunch, so we load the kids up on snacks, and head to the USCIS office in Lawrence.

After parking up, we head inside, and wait. We have all the paperwork we could reasonably be expected to have – birth certificates, passports, even wedding photos to prove that our marriage is legit.

To keep the kids entertained in the absence of electronics (due to a no-camera rule which bars tablets and phones) we have paper and crayons. I suggest “America” as a drawing prompt for my eldest, and he produces a Statue of Liberty and some flags, which I guess covers the topic for a 7-year-old.

Finally we’re called in to see an Immigration Support Officer, the end-boss of American bureaucracy and… It’s fine. It’s fine! She just goes through our green card forms and confirms every answer; takes our medical forms and photos; checks the passports; asks us about our (Caribbean) wedding and takes a look at our photos; and gracefully accepts the eldest’s drawing for her wall.

We’re in and out of her office in under an hour. She tells us that unless she finds an issue in our background checks, we should be fine – expect an approval notice within 3 weeks, or call up if there’s still nothing in 6. Her tone is congratulatory, but with nothing tangible, and still the “unless” lingering, it’s hard to feel much of anything. We head home, numb more than anything.

Aftermath

After two fraught weeks, we’re both not entirely sure how to process things. I had expected a stress headache then normality, but instead it was more… Gradual.

During the following days, little things like the colours of the leaves leave me tearing up – and as my wife and I talk, we realise the extent to which the stress has been getting to us. And, more to the point, the extent to which being adrift without having somewhere we can confidently call home has caused us to close ourselves off.

The first day back in the office after the interview, a co-worker pulls me aside and asks if I’m okay – and I realise how much the answer has been “no”. Friday is the first day where I can even begin to figure that out.

The weekend continues with emotions all over the place, but a feeling of cautious optimism alongside.

I-485 Adjustment of Status approval notifications

On Monday, 4 calendar days after the AOS interview, we receive our notifications, confirming that we can stay. I’m still not sure I’m processing it right. We can start making real, long term plans now. Buying a house, the works.

I had it easy, and don’t deserve any sympathy

I’m a white guy, who had thousands of dollars’ worth of support from a global megacorp and their army of lawyers. The immigration process was fraught enough for me that I couldn’t sleep or eat – and I went through the process in one of the easiest routes available.

YouTube video from the HBO show Last Week Tonight, covering legal migration into the USA

I am acutely aware of how much more terrifying and exhausting the process might be, for anyone without my resources and support.

Never, for a second, think that migration to the US – legal or otherwise – is easy.

The subheading where I answer the inevitable question from the peanut gallery

My eldest started school in the UK in September 2015. Previously he’d been at nursery, and we’d picked him up around 6-6:30pm every work day. Once he started at school, instead he needed picking up before 3pm. But my entire team at Xamarin was on Boston time, and did not have the world’s earliest risers – meaning I couldn’t have any meaningful conversations with co-workers until I had a child underfoot and the TV blaring. It made remote working suck, when it had been fine just a few months earlier. Don’t underestimate the impact of time zones on remote workers with families. I had begun to consider, at this point, my future at Microsoft, purely for logistical reasons.

And then, in June 2016, the UK suffered from a collective case of brain worms, and voted for self immolation.

I relocated my family to the US, because I could make a business case for my employer to fund it. It was the fastest, cheapest way to move my family away from the uncertainty of life in the UK after the brain-worm-addled plan to deport 13% of NHS workers. To cut off 40% of the national food supply. To make basic medications like Metformin and Estradiol rarities, rationed by pharmacists.

I relocated my family to the US, because despite all the country’s problems, despite the last three years of bureaucracy, it still gave them a better chance at a safe, stable life than staying in the UK.

And even if time proves me wrong about Brexit, at least now we can make our new lives, permanently, regardless.

Too many cores

Arming yourself

ARM is important for us. It’s important for IoT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 SoftIron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest addition, currently in provisional production: 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per core than the AMD chips – a good 25–50%. Which raised the question: “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de facto main OS for the Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian’s armhf port targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on it.
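
Concretely, the difference boils down to roughly the following compiler targets (illustrative invocations using the commonly quoted defaults for each port – not the distributions’ exact toolchain configurations):

    # Debian armhf baseline: ARMv7-A, VFPv3-D16, hard-float ABI
    gcc -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard -O2 -c example.c

    # Raspbian baseline: ARMv6, plain VFP, hard-float ABI - code an ARM1176JZF-S can actually run
    gcc -march=armv6 -mfpu=vfp -mfloat-abi=hard -O2 -c example.c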

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
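
If you want to replicate the setup, the recipe is roughly as follows – the suite name, paths, and keyring location are illustrative, so adjust them to your environment:

    # On the arm64 build host. The raspbian-archive-keyring package ships
    # /usr/share/keyrings/raspbian-archive-keyring.gpg; grab the .deb from the
    # Raspbian archive if your distribution doesn't carry the package.
    sudo debootstrap --arch=armhf \
        --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg \
        stretch /srv/chroot/raspbian-stretch http://archive.raspbian.org/raspbian

    # Or, for package builds, create a cowbuilder base chroot pointed at the
    # Raspbian mirror instead of the Debian one:
    sudo cowbuilder --create --distribution stretch --architecture armhf \
        --basepath /var/cache/pbuilder/raspbian-stretch-armhf.cow \
        --mirror http://archive.raspbian.org/raspbian \
        --debootstrapopts --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg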

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
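
On an arm64 box – assuming the kernel was built with the relevant emulation support – the switches look like this (1 means trap-and-emulate; 0 would mean a SIGILL for the offending process):

    # Enable kernel emulation of the legacy instructions, so a 32-bit ARMv6
    # userland (such as a Raspbian chroot) keeps working on ARMv8 hardware.
    sudo sysctl -w abi.swp=1 abi.setend=1 abi.cp15_barrier=1

    # Persist the setting across reboots:
    printf 'abi.swp=1\nabi.setend=1\nabi.cp15_barrier=1\n' | sudo tee /etc/sysctl.d/99-armv6-compat.conf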

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.
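
Back-of-the-envelope, those two rough numbers multiply out to something alarming:

    # ~1,000 cycles per emulated barrier, ~1,000,000 emulations per second:
    awk 'BEGIN { printf "%d cycles per second spent emulating\n", 1000 * 1000000 }'
    # => 1000000000, i.e. roughly a whole core's worth of work on a part clocked
    #    at 1-2 GHz, even before the system-wide effect described next.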

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones because I had 64 cores, not 8 – and that I could improve performance by running a VM with only one core in it (so CP15 calls inside that environment would only affect that VM, not the rest of the machine).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey, one day this might be useful” – and as it happened, it was. I had to grab the qemu-efi-aarch64 package, which contains the UEFI firmware needed to boot arm64 guests, but otherwise I was easily able to connect to the system via virt-manager on my desktop and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), and I was able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via an emulated GPU.
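
For reference, the command-line equivalent of what I clicked through in virt-manager would be something along these lines – the guest name, sizes, and ISO path are made up for illustration:

    # A single-vCPU arm64 guest on the arm64 host, installed from the Ubuntu
    # 18.04 server ISO, using a serial console rather than an emulated GPU.
    virt-install \
        --connect qemu+ssh://build-host/system \
        --name raspbian-builder \
        --vcpus 1 --memory 4096 \
        --arch aarch64 --os-variant ubuntu18.04 \
        --cdrom /var/lib/libvirt/images/ubuntu-18.04-server-arm64.iso \
        --disk size=40 \
        --graphics none --console pty,target_type=serial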

Because I’m an idiot, I then wasted my time making a stock Raspbian image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible – meaning using pbuilder – this simply wasn’t a needed step. The OS inside the VM didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably right at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”.

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

Bootstrapping RHEL 8 support on mono-project.com

Preamble

On mono-project.com, we ship packages for Debian 8, Debian 9, Raspbian 8, Raspbian 9, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, RHEL/CentOS 6, and RHEL/CentOS 7. Because this is Linux packaging we’re talking about, making one or two repositories to serve every need just isn’t feasible – incompatible versions of libgif, libjpeg, libtiff, OpenSSL, GnuTLS, etc, mean we really do need to build once per target distribution.

For the most part, this level of “LTS-only” coverage has served us reasonably well – the Ubuntu 18.04 packages work in 18.10, the RHEL 7 packages work in Fedora 28, and so on.

However, when Fedora 29 shipped, users found themselves running into installation problems.

I was not at all keen on adding non-LTS Fedora 29 to our build matrix, due to the time and effort required to bootstrap a new distribution into our package release system. And, as if in answer to my pain, the beta release of Red Hat Enterprise Linux 8 landed.

Cramming a square RPM into a round Ubuntu

Our packaging infrastructure relies upon a homogeneous pool of Ubuntu 16.04 machines (x64 on Azure, ARM64 and PPC64el on-site at Microsoft), using pbuilder to target Debian-like distributions (building i386 on the x64 VMs, and various ARM flavours on the ARM64 servers); and mock to target RPM-like distributions. So in theory, all I needed to do was drop a new RHEL 8 beta mock config file into place, and get on with building packages.
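
For the unfamiliar, a mock chroot config is just a fragment that fills in a config_opts dictionary. A stripped-down sketch of such a file – the root name, repo URL, and package list here are illustrative, not our production config – looks something like this:

    # /etc/mock/rhel-8-beta-x86_64.cfg (hypothetical name)
    config_opts['root'] = 'rhel-8-beta-x86_64'
    config_opts['target_arch'] = 'x86_64'
    config_opts['dist'] = 'el8'
    config_opts['releasever'] = '8'
    config_opts['chroot_setup_cmd'] = 'install bash coreutils gcc gcc-c++ make redhat-rpm-config rpm-build shadow-utils tar unzip which xz'
    config_opts['package_manager'] = 'dnf'  # ...and this is where the plan hits a snag

    config_opts['yum.conf'] = """
    [main]
    keepcache=1
    gpgcheck=0

    [rhel-8-beta-baseos]
    name=RHEL 8 beta - BaseOS
    baseurl=https://example.com/rhel-8-beta/BaseOS/$basearch/os/
    """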

Just one problem – between RHEL 7 (based on Fedora 19) and RHEL 8 (based on Fedora 28), the Red Hat folks had changed package manager, dropping Yum in favour of DNF. And mock works by using the host distribution’s package manager to perform operations inside the build root – i.e. yum.deb from Ubuntu.

It’s not possible to install RHEL 8 beta with Yum. It just doesn’t work. It’s also not possible to update mock to $latest and use a bootstrap chroot, because reasons. The only options: either set up Fedora VMs to do our RHEL 8 builds (since they have DNF), or package DNF for Ubuntu 16.04.

For my sins, I opted for the latter. It turns out DNF has a lot of dependencies, only some of which are backportable from post-16.04 Ubuntu. The dependency tree looked something like:

  •  Update mock and put it in a PPA
    •  Backport RPM 4.14+ and put it in a PPA
    •  Backport python3-distro and put it in a PPA
    •  Package dnf and put it in a PPA
      •  Package libdnf and put it in a PPA
        •  Backport util-linux 2.29+ and put it in a PPA
        •  Update libsolv and put it in a PPA
        •  Package librepo and put it in a PPA
          •  Backport python3-xattr and put it in a PPA
          •  Backport gpgme1.0 and put it in a PPA
            •  Backport libgpg-error and put it in a PPA
        •  Package modulemd and put it in a PPA
          •  Backport gobject-introspection 1.54+ and put it in a PPA
          •  Backport meson 0.47.0+ and put it in a PPA
            •  Backport googletest and put it in a PPA
        •  Package libcomps and put it in a PPA
    •  Package dnf-plugins-core and put it in a PPA
  •  Hit all the above with sticks until it actually works
  •  Communicate to community stakeholders about all this, in case they want it

This ended up in two PPAs – the end-user usable one here, and the “you need these to build the other PPA, but probably don’t want them overwriting your system packages” one here. Once I convinced everything to build, it didn’t actually work – a problem I eventually tracked down and proposed a fix for here.

All told it took a bit less than two weeks to do all the above. The end result is, on our Ubuntu 16.04 infrastructure, we now install a version of mock capable of bootstrapping DNF-requiring RPM distributions, like RHEL 8.

RHEL isn’t CentOS

We make various assumptions about package availability which are true for CentOS, but not RHEL (8). The first problem was the (lack of) availability of the EPEL repository for RHEL 8 – in the end I just grabbed the relevant packages from EPEL 7, shoved them on a web server, and got away with it. The second was structural – for a bunch of the libraries we build against, the packages are available in the public RHEL 8 repo, but the corresponding -devel packages are in a (paid, subscription-required) repository called “CodeReady Linux Builder” – and using this repo isn’t mock-friendly. In the end, I just grabbed the three packages I needed via curl, and transferred them to the same place as the EPEL 7 packages.
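
The mechanics of that stop-gap were nothing fancier than the following – the package names and URLs are placeholders, not the actual files involved:

    # Collect the handful of EPEL 7 / CodeReady -devel RPMs in a directory served
    # over HTTP, and turn it into a repository mock can use as a baseurl.
    mkdir -p /var/www/html/rhel8-extras
    cd /var/www/html/rhel8-extras
    curl -LO https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/example-package-1.0-1.el7.noarch.rpm
    curl -LO https://example.com/codeready/example-devel-1.0-1.el8.x86_64.rpm
    createrepo .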

Finally, I was able to begin the bootstrapping process.

RHEL isn’t Fedora

After re-bootstrapping all the packages from the CentOS 7 repo into our “““CentOS 8””” repo (we make lots of naming assumptions in our control flow, so the world would break if we didn’t call it CentOS), I tried installing on Fedora 29, and… Nope. Dependency errors. Turns out there are important differences between the two distributions. The main one is that any package with a Python dependency is incompatible, as the two handle Python paths very differently. Thankfully, the diff here was pretty small.

The final, final end result: we now do every RPM build on CentOS 6, CentOS 7, and RHEL 8. And the RHEL 8 repo works on Fedora 29.

MonoDevelop 7.7 on Fedora 29.

The only erratum: MonoDevelop’s version control addin is built without support for ssh+git:// repositories, because RHEL 8 does not offer a libssh2-devel package. Other than that, hooray!

On the topic of being part of a large and diverse community, including people whose identities you might not be able to personally understand

Adventure Time GIF: [Princess Bubblegum] People get built different. We don't need to figure it out. We just need to respect it

EOL notification – Debian 7, Ubuntu 12.04

Mono packages will no longer be built for these ancient distribution releases, starting from when we add Ubuntu 18.04 to the build matrix (likely early to mid April 2018).

Unless someone with a fat wallet screams, and throws a bunch of money at Azure, anyway.

Update on MonoDevelop Linux releases

Once upon a time, mono-project.com had two package repositories – one for RPM files, one for Deb files. This, as it turned out, was untenable – just building on an old distribution was insufficient to offer “works on everything” packages, due to dependent library APIs not being necessarily forward-compatible. For example, openSUSE users could not install MonoDevelop, because the versions of libgcrypt, libssl, and libcurl on their systems were simply incompatible with those on CentOS 7. MonoDevelop packages were essentially abandoned as unmaintainable.

Then, nearly 2 years ago, a reprieve – a trend towards development of cross-distribution packaging systems made it viable to offer MonoDevelop in a form which did not care about openSUSE or CentOS or Ubuntu or Debian having incompatible libraries. A release was made using Flatpak (born xdg-app). And whilst this solved a host of distribution problems, it introduced new usability problems. Flatpak means sandboxing, and without explicit support for sandbox escape at the appropriate moment, users would be faced with a different experience than the one they expected (e.g. not being able to P/Invoke libraries in /usr/lib, as the sandbox’s /usr/lib is different).

In 2 years of on-off development (mostly off – I have a lot of responsibilities and this was low priority), I wasn’t able to add enough sandbox awareness to the core of MonoDevelop to make the experience inside the sandbox feel as natural as the experience outside it. The only community contribution to make the process easier was this pull request against DBus#, which helped me make a series of improvements, but not at a sufficient rate to make a “fully Sandbox-capable” version any time soon.

In the interim between giving up on MonoDevelop packages and now, I built infrastructure within our CI system for building and publishing packages targeting multiple distributions (not the multi-distribution packages of yesteryear). And so to today, when recent MonoDevelop .debs and .rpms are or will imminently be available in our Preview repositories. Yes it’s fully installed in /usr, no sandboxing. You can run it as root if that’s your deal.

MonoDevelop 7.4.0.1026 on CentOS 6

Where’s the ARM builds?

https://github.com/mono/monodevelop/pull/3923

Where’s the ARM64 builds?

https://github.com/ericsink/SQLitePCL.raw/issues/199

Why aren’t you offering builds for $DISTRIBUTION?

It’s already an inordinate amount of work to support the 10(!) distributions I already do. Especially when, due to an SSL state engine bug in all versions of Mono prior to 5.12, nuget restore in the MonoDevelop project fails about 40% of the time. With 12 (currently) builds running concurrently, the likelihood of a successful publication of a known-good release is about 0.2%. I’m on build attempt 34 since my last packaging fix, at time of writing.
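
(That 0.2% isn’t hyperbole, it’s just the failure rate compounded – every one of the concurrent builds has to come up green at the same time:)

    # 40% failure per build means 60% success, across 12 concurrent builds:
    awk 'BEGIN { printf "%.2f%%\n", (0.6 ^ 12) * 100 }'
    # => 0.22%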

Can this go into my distribution now?

Oh God no. make dist should generate tarballs which at least work now, but they’re very much not distribution-quality. See here.

What about Xamarin Studio/Visual Studio for Mac for Linux?

Probably dead, for now. Not that it ever existed, of course. *cough*. But if it did exist, a major point of concern for making something capital-S-Supportable (VS Enterprise is about six thousand dollars) is being able to offer a trustworthy, integration-tested product. There are hundreds of lines of patches applied to “the stack” in Mac releases of Visual Studio for Mac, Xamarin.Whatever, and Mono. Hundreds to Gtk+2 alone. How can we charge people money for a product which might glitch randomly because the version of Gtk+2 in the user’s distribution behaves weirdly in some circumstances? If we can’t control the stack, we can’t integration test, and if we can’t integration test, we can’t make a capital-P Product. The frustrating part of it all is that the usability issues of MonoDevelop in a sandbox don’t apply to the project types used by Xamarin Studio/VSfM developers. Android development end-to-end works fine. Better than Mac/Windows in some cases, in fact (e.g. virtualization on AMD processors). But because making Gtk#2 apps sucks in MonoDevelop, users aren’t interested. And without community buy-in on MonoDevelop, there’s just no scope for making MonoDevelop-plus-proprietary-bits.

Why does the web stuff not work?

WebkitGtk dropped support for Gtk+2 years ago. It worked in Flatpak MonoDevelop because we built an old WebkitGtk, for use by widgets.

Aren’t distributions talking about getting rid of Gtk+2?

Yes 😬

Packaging is hard. Packager-friendly is harder.

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, CVS, etc, repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build-environment dependency problem – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers the more closely the released software adheres to their ideal model – no non-source files in the released source, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because they’re so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies, but 10? 100? If you’re consuming a LOT of packages via the language package manager, then as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down further is not going to improve matters.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system has moved so far away from the packager ideal that it’s basically impossible, at current community & company staffing levels, to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSBuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to ensure all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.