News

My campaign to produce Shakespeare's Sonnets: A Graphic Novel Adaptation needs your help! Please sign up at https://www.patreon.com/fisherking for access to exclusive content and the opportunity to be a part of the magic!

I'm also producing a podcast discussing the sonnets, available on industrial curiosity, iTunes, Spotify, Stitcher, TuneIn and YouTube!
For those who prefer reading to listening, the first 25 sonnets have been compiled into a book that is available now on Amazon and the Google Play store.

Saturday, 28 March 2020

An improved (fairer) playlist shuffling algorithm

Lots of people find playlist shuffling insufficiently random for a variety of reasons, some of which have been addressed by the industry.

There's one aspect my wife and I haven't seen addressed, though: making sure that no songs get "left behind" whenever a playlist is reshuffled, whether intentionally or by switching back and forth between playlists.

In an attempt to sow seeds, I've just put together an example of an improvement that can easily be applied to any of the existing shuffle algorithms.
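
The gist of it, in node.js, looks something like the sketch below - this isn't the example linked above, just an illustration of the idea using a standard Fisher-Yates shuffle, with hypothetical song objects that carry an id:

// keep track of which songs have already been played in the current cycle,
// and when reshuffling put the unplayed songs ahead of the played ones so
// that nothing gets "left behind"
function fisherYatesShuffle(items) {
    const result = items.slice();
    for (let i = result.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [result[i], result[j]] = [result[j], result[i]];
    }
    return result;
}

function fairReshuffle(playlist, playedIds) {
    const unplayed = playlist.filter(song => !playedIds.has(song.id));
    const played = playlist.filter(song => playedIds.has(song.id));
    // unplayed songs always come first, so reshuffling (or switching playlists
    // and back) never buries them behind songs you've already heard
    return fisherYatesShuffle(unplayed).concat(fisherYatesShuffle(played));
}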

Tuesday, 24 March 2020

The value of git (re)parenting

I'm not a command-line person. I may have grown up with it, but... let's just say I've developed an allergy. I want GUIs, I want simplicity, and I want visual corroboration that I'm doing what I think I'm doing. I really don't see why I should have to become an expert in every tool I use before it can be useful to me.

This is why I adore SourceTree, and (last time I used it) GitEye: an easier, more intuitive interface, and visualizations that help you see whether what you're doing makes sense. You're less likely to make mistakes!

I'm also (and for similar reasons) a huge fan of squashed merges in Git: one commit per feature / bugfix / hotfix, and (theoretically) the ability to go through the individual commits on my own time if I ever need to. I use interactive rebasing* a lot to achieve a similar result, but getting other devs to use it responsibly is a lot more trouble than teaching them to add --squash to the merge command.

The downside is that because I've become used to interactive rebasing to squash my commits before merging, I've also become used to regular merges drawing a neat line on the branch graph showing me where my merges originated. Squashed commits don't give you that. And that's sad. It's also problematic - this morning I freaked out because I thought the code in a branch I'd squash-merged into wasn't where it needed to be, all because I was relying on those parenting lines and didn't immediately think to do a diff**.

* Ironically, from the command line. It's the one thing I find less intuitive and riskier to do in SourceTree.

** While we're talking about branch diffs: if you're using Bitbucket, don't trust the branch diff - the UI uses a three-dot diff, and what you actually want is a two-dot diff, i.e. git diff branch1..branch2

So, after much surfing around the internets and learning lots more about git's inner workings than I care to, I came across an elegant little solution and have wrapped it with a bash script in the hopes that it'll be found useful. All you have to do is run this script as follows:

./add_parent.sh TARGET_COMMIT_ID NEW_PARENT_COMMIT_ID
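
I won't pretend the following is the actual script - take it as a hypothetical sketch of the approach, assuming it's built around git replace --graft (available since git 2.6):

#!/bin/bash
# hypothetical sketch of an add_parent.sh wrapper - not necessarily the script linked above
# usage: ./add_parent.sh TARGET_COMMIT_ID NEW_PARENT_COMMIT_ID
TARGET_COMMIT_ID=$1
NEW_PARENT_COMMIT_ID=$2

# %P lists the commit's existing parents (empty for a root commit)
EXISTING_PARENTS=$(git log --pretty=%P -n 1 "$TARGET_COMMIT_ID")

# graft a replacement commit that keeps the existing parents and adds the new one,
# which restores the "where did this merge come from" line on the branch graph
# ($EXISTING_PARENTS is intentionally unquoted so multiple parents split into separate arguments)
git replace --graft "$TARGET_COMMIT_ID" $EXISTING_PARENTS "$NEW_PARENT_COMMIT_ID"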

Sunday, 26 January 2020

Lessons from my first Direct-to-Print experience

The direct-to-print paperback edition of Shakespeare's Sonnets Exposed: Volume 1 is now up for review, after I finally ironed out the formatting kinks last night and finished fiddling with the cover around 2am. So now that I've had a chance to sleep on it, and re-re-re-review everything before submitting it, I have a few notes for anyone who wants to self-publish a book on Kindle.

1. Apple's Pages is a fantastic tool for a number of reasons, but what it produces by default really isn't great for reflowable epubs or paperback formats, both of which are important if you want your work to be accessible, readable and attractive to your readers. It's a good place to start, though, just as Microsoft Word is: it exports to Word format, and once you have a Word doc you can import your work into Amazon's Kindle Creator.

2. I wish I'd begun with Kindle Creator, even though my intention is to publish on other platforms as well. Kindle is not only the easiest platform to get started on - and is probably the most accessible for your audience - but from a formatting perspective is effectively the lowest common denominator: it's tough to use custom fonts for Kindle publications, and I suspect they've made it so intentionally in order to standardize the reading experience.

My advice is to sort out the formatting for the ebook, publish it (see step 5), convert it to EPUB for other platforms, then tweak the formatting (and possibly content) for print publishing.

3. Don't bother adding your ISBN barcodes to the books yourself. Even if you have one issued for your paperback, it's best to use the KDP-generated one for the Kindle direct-to-paperback offering and reserve any others for paperbacks where you have to add the barcode to the cover manually.

4. You can download cover templates here. I didn't realize that and I made my own, which in retrospect was silly.

5. After you've "published" your book, you'll have a KPF file that you can upload to KDP. From Converting KPF to EPUB format:

I recently managed to successfully convert kpf to epub format using jhowell's KFX conversion plugin for Calibre. Just install the plugin and use drag-and-drop to load your kpf file into Calibre. Then convert the kpf file to epub in the normal way using Calibre. Save your new epub to your desktop and then run Epubcheck on it to ensure that it is a valid epub (it always passes).

If you run into any issues with these suggestions, please let me know in the comments!

Monday, 20 January 2020

ISBN codes for Dummies

Step 1: Acquire ISBN codes. For South African residents this is a free service (thank you, NLSA!), and all you have to do is request them and assign them (there's really no need to pay anyone any money - simply look up contact details on their website and call or email until you reach someone).

It's important to note that not only does each format (paperback, hardcover, etc.) need its own ISBN, but each e-book format (e.g. epub, mobi, PDF) does as well!

Step 2: Generate the actual barcode using the ISBN 13 section of this free online generator. As explained here:
Before making an ISBN barcode, the user must first apply for an ISBN number. This number should be 10 or 13 digits, for example 0-9767736-6-X or 978-0-9767736-6-5. Once the ISBN number is obtained, it should be displayed above the barcode on the book. All books published after January 1, 2007 must display the number in the new 13-digit format, which is referred to as ISBN-13. Older 10 digit numbers may be converted to 13 digits with the free ISBN conversion tool.

The last digit of the ISBN number is always a MOD 11 checksum character, represented as numbers 0 through 10. When the check character is equal to 10, the Roman numeral X is used to keep the same amount of digits in the number. Therefore, the ISBN of 0-9767736-6-X is actually 0-9767736-6 with a check digit of 10. The ISBN check digit is never encoded in the barcode.
Simply remove the hyphens (dashes) and the check digit from your ISBN, paste it into the text box and hit "refresh". I recommend changing the image settings to PNG format with 300 DPI. You can also change the colors if you wish.
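
If you'd like to sanity-check a check digit yourself, the MOD 11 calculation described above boils down to something like this little node.js sketch (the function name is just for illustration, and it uses the example ISBN from the quote):

// ISBN-10 check digit: weight the first nine digits 10 down to 2, then pick the
// digit that brings the weighted sum up to a multiple of 11 (10 is written as X)
function isbn10CheckDigit(firstNineDigits) {
    const digits = firstNineDigits.replace(/-/g, '').split('').map(Number);
    const sum = digits.reduce((total, digit, i) => total + digit * (10 - i), 0);
    const check = (11 - (sum % 11)) % 11;
    return check === 10 ? 'X' : String(check);
}

console.log(isbn10CheckDigit('0-9767736-6')); // X, matching the example 0-9767736-6-X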

Step 3: You now have ISBNs and their barcodes, but for a professional look you'll want the barcode title rendered in the right font. That's as simple as adding ISBN 0-9767736-6-X above the barcode image. The free generator uses Arial, but the more traditional choice is a monospaced font.

Monday, 23 September 2019

@mysql/xdevapi joy!

after hitting a wall last night with the existing node.js mysql packages, which for some reason have been years behind integrating with mysql 8's authentication, i discovered @mysql/xdevapi. the documentation's a bit heavy and not really comprehensive, but now that i've figured out how to use it, i'm very pleased with how it operates! unfortunately, i'm going to have to roll my own migrations management, but that's a small price to pay for being able to interface with the latest mysql databases in an intelligent way.

in the interests of reducing friction for other adopters, i've rolled some sample queries into my database creation script. enjoy!

UPDATE: i've subsequently learned that table joins have not been implemented, so to perform those you'd have to use the session.sql method and use it in the same way (the same promise chaining) as the CRUD methods. seems like a serious oversight, but whatever.
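
for example, a join through session.sql ends up looking something like this (the schema, table and column names below are just placeholders, and i'm assuming the row-callback form of execute - check the docs for the version you're on):

const mysqlx = require('@mysql/xdevapi');

// 33060 is the default X Protocol port that @mysql/xdevapi speaks
mysqlx.getSession({ host: 'localhost', port: 33060, user: 'app', password: 'secret', schema: 'mydb' })
    .then(session =>
        // joins aren't available through the CRUD methods, so fall back to raw SQL
        session.sql('SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id WHERE o.total > ?')
            .bind(100)
            .execute(row => console.log(row))
            .then(() => session.close()));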

Friday, 23 August 2019

Handling emails with node.js and Mailparser

After sorting out mail forwarding and piping emails with postfix, I then needed to understand how to handle emails being POSTed to an API endpoint.

To parse an email with node.js, I recommend using Mailparser's simpleParser. I'm using express with bodyParser configured as follows:
app.use(bodyParser.json({
 limit : config.bodyLimit
}));
In your handler:
const express = require('express');
const router = express.Router();
const simpleParser = require('mailparser').simpleParser;
or
import { Router } from 'express';
import { simpleParser } from 'mailparser';
and then
router.post('/', (req, res) => {
    // hand the raw request stream straight to simpleParser
    simpleParser(req)
        .then(parsed => {
            res.json(parsed);
        })
        .catch(err => {
            // res.json(500, err) is deprecated in Express 4 - set the status explicitly
            res.status(500).json(err);
        });
});
Mailparser is excellent, and well documented, but the documentation assumes that we're familiar with the email format. Fortunately, oblac's example email exists for those of us who aren't!

To test, send the example email to the endpoint via curl:

curl --data-binary "@./example.eml" http://your-domain-name/api/email

or Postman (attach file to "binary").

And we're good to go!

Thursday, 22 August 2019

Mail forwarding and piping emails with Postfix for multiple domains

The past few weeks I've been learning about mail servers, and the biggest takeaway for me is that it's generally worth paying someone else to handle the headache. As usual, the obstacles to configuring a mail server correctly are primarily in the lack of useful documentation and examples, so I'm putting this down here in the hopes that it'll be helpful to like-minded constructively-lazy devs.

While I'm happily using mailgun for mail sending (after much frustration, I threw in the towel trying to integrate DKIM packages with Postfix to secure my outgoing emails), I was certain that I could at least have my mail server handle mail forwarding for my multiple domains. While that proved to be fairly straightforward, I then tumbled down a rabbit hole trying to get Postfix to pipe certain emails to a node.js script for processing.

Here are the steps you'll need to take:
  • Set up your A and MX records for your domain: the A record @ pointing to the IP address of the server you're going to be receiving emails on, and the MX record with the hostname @ and the value 10 mail.your-domain-name

    If your mail server is not the same as your primary A record, simply create an additional A record mail pointing to the correct IP address.
  • sudo apt-get install postfix
    Select "Internet Site" and enter your-domain-name (fully qualified)
  • sudo vi /etc/postfix/main.cf
    • Add mail.your-domain-name to the list of mydestination values
    • Append
      virtual_alias_domains = hash:/etc/postfix/virtual_domains
      virtual_alias_maps = hash:/etc/postfix/virtual
      to the end of the file
  • sudo vi /etc/aliases
    curl_email: "|curl --data-binary @- http://your-domain-name/email"
  • sudo newaliases
  • sudo vi /etc/postfix/virtual_domains
    example.net   #domain
    example.com   #domain
    your-domain-name   #domain
    (the #domain fields suppress warnings)
  • sudo postmap /etc/postfix/virtual_domains
  • sudo vi /etc/postfix/virtual
    info@your-domain-name bob@gmail.com
    everyone@your-domain-name bob@gmail.com jim@gmail.com
    email_processor@your-domain-name curl_email@localhost
    @your-domain-name catchall@whereveryouwant.com
    ted@example.net jane@outlook.com
  • sudo postmap /etc/postfix/virtual
  • sudo /etc/init.d/postfix reload

You should be able to find your postfix logs at /var/log/mail.log. Good luck!
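
To sanity-check the whole chain (assuming the email_processor alias and endpoint from the steps above), you can send yourself a test message from the server using the sendmail compatibility binary that Postfix installs, and watch the log while it's delivered:

# send a minimal test email to the pipe alias and watch postfix handle it
echo -e "Subject: test\n\nhello from postfix" | sendmail email_processor@your-domain-name
tail -f /var/log/mail.log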

Monday, 5 August 2019

FreeVote: anonymous Q&A app

I can't believe something like this doesn't exist already! It's far from a polished product, but it's already pretty functional. Feel free to send me feedback - it'll help me prioritize the long list of improvements I've already come up with. This was inspired by my coworkers: every retrospective, we all write down answers to sensitive questions "anonymously" on folded papers that one team member gathers and reviews... this way we can do it truly anonymously and all see the results.

FreeVote

Saturday, 6 April 2019

Hosting my own podcast

I've been having trouble with Podcast Garden - it turns out they're a really unprofessional outfit: more than half the time their website is inaccessible, and their customer service is poor. They've also taken a lot more money off me than I agreed to, so I've opened a dispute with PayPal; hopefully they'll sort this all out.

In the meanwhile, I've spent the last couple of days migrating to a custom solution - I honestly don't know what I'd do if I weren't a software developer. The Shakespeare's Sonnets Exposed podcast is now being hosted right here on industrial curiosity, and not only am I no longer at the mercy of other people's incompetence, it's a much cheaper solution, too!

Now I just need to find the time to make my solution easily available to the public...

Tuesday, 19 February 2019

Effective Technical Interviewing with GitHub

Throughout my career I've encountered many problems with hiring processes, some mildly annoying, some decidedly infuriating, and some utterly baffling. Of all of those issues, few have pressed my buttons as consistently as those relating to how technical assessments are generally handled.

The cost of an ineffective technical interview to the company can be greater than the hours invested in interviewing or the lack of resources until the right candidate is found: many regulations make it difficult to let a low-performing employee go, and in most situations the learning curve extends beyond the legal probation period.

The cost of a technical interview to the applicant can be even greater: interviewing at multiple companies, or while already employed, leaves little time to do the assessments, and each one is usually a lot of effort, for no compensation, that produces nothing of value.

The bigger companies have found a reasonably good solution to the technical assessment, but phone screening and days full of whiteboard interviews are simply not affordable for the smaller players. The focus of this article is on technical assessments wherein the candidate is expected to produce a solution, or set of solutions, within a set period of time.

There are two major downsides to these coding assessments:

1. Whether the assessment is done in an in-office setting or as a take-home assignment is of little consequence: either way, it is very difficult to extract meaningful information about the strengths and weaknesses of a candidate from the results. There are many variables that may influence the outcome - the candidate's ability to code under stressful conditions, for a start - and usually there's not enough interaction with the candidate during the coding process to assess collaborative behaviour.

2. The work done in producing these solutions is, for all intents and purposes, wasted effort. Candidates are consistently asked to demonstrate their skills by producing arbitrary code that would not be of use or interest to anyone going forward.

In this article I'd like to present two possible strategies, both using GitHub, that could significantly improve the effectiveness and overall outcomes of this important step in a candidate's application.

First strategy: Existing GitHub issues

The world of open source is full of real problems that need solving, and nine times out of ten (for smaller organizations in particular) some of them are in software packages that the interviewing company actually uses.

One possible strategy is to break down the technical skills you require from your candidates - not just the languages and frameworks, but problem spaces as well - and find open issues on GitHub with similar profiles.

Second Strategy: Porting an issue to GitHub

It is a common misconception that everything in a proprietary codebase must be kept under lock and key. With the right license, there are plenty of issues that a company might have that can be isolated from the codebase, ported to GitHub, and then reincorporated once solved.

Similarly to the first strategy, it is important to break down the technical skills you require from your candidates and find issues that would highlight their relevant strengths and weaknesses. The act of "open-sourcing" this code would be as simple as establishing the right license and creating a repo. Ideally, it would be code that others might find useful, too!

With this strategy, there would need to be a different problem for each individual candidate, as the moment one candidate has come up with a solution there would be little to stop another from taking a peek and little value in multiple developers repeating the same effort.

The advantage, obviously, is that work put into interviewing would directly benefit the company, but the moment you're benefitting from the candidate's efforts they would be entitled to at least some compensation, in addition to being able to show their work in other job applications.

When an applicant has successfully delivered a solution, you could compensate them directly, which may be preferable, or you could use Gitcoin, where you can set a bounty on your issues for immediate payment on the acceptance of a PR.

Either way, it'd be a small price to pay for development, the candidate would have earned a little money (which most candidates desperately need), and you'd have useful information for your hiring process.

Conclusion

Contributing to GitHub projects will show more than just a candidate's code quality; it can also show communication skills and collaborative behaviour. Additionally, choosing or defining projects with strict requirements for code conventions and unit tests will push your candidate more significantly than just mentioning that you'd like to see testing done.

An accepted PR (pull request) will make the world a (slightly) better place; it would be a good deed on your part, a good deed on the candidate's part, and the resulting effort would have actual value.

It is also far more likely that a candidate will be motivated to solve a real problem than a fictitious one, and their efforts will be rewarded by becoming a part of their portfolio. As somebody who has predominantly worked on proprietary code, I can testify to the utility of being able to direct a potential employer to my GitHub account!

With either strategy, everybody wins. For the developer, the technical assessment becomes real world experience that they can show off to others, and the employer gets to see real-world coding performed under real-world conditions.

Wins all around!
