Journal

2024-01-08 My plans for 2023 - final report

At the beginning of last year I wrote about my plans for 2023. Now that 2023 has come to an end, it’s a good moment to look back and reflect on them a bit.

While 2022 was sort of disappointing, 2023 definitely was not. My day job continues pretty uneventfully (which is of course good). I started doing “weekly reviews”, which is still not very easy for me (and I hope to learn to do them better next year). I didn’t yet start with quarterly and yearly reviews (well, this post doesn’t really count, since I only talk here about things I’m doing “in public”), but I hope to learn to do those, too. The last 10 years or so have taught me that it is possible (and not even extremely difficult) to instill new habits, so I’m pretty confident this will go fine.

As I mentioned (almost) a year ago, I wrote the booklet about personal accounting in Ledger. As expected, it is not exactly “popular”, but Leanpub tells me that 19 people found it worth buying, and even if that is (just a bit) less than I hoped for, it’s still fine with me. And of course a big “thank you” to all the readers who trusted me with their money and with the time they (hopefully) enjoyed spending on what I wrote!

My plan to work regularly on the Elisp book was, I have to admit, a complete failure. Well, I really hope it’s going to be better this year. As I mentioned previously, I started to devote my Monday “writing slot” to it, and that seems to work pretty well. The main issue is that the changes I’d like to make are going to be more time-consuming than I expected – but I’m not in a big hurry, so that’s ok.

Next comes my “secret project”. Last time I told the story of how I became a Whovian. For the past 3 years I diligently translated subtitles for well over a hundred episodes into Polish (which took me over 400 hours!), improved Emacs Subed mode and later (in July 2023) started my Doctor Who-related blog, called Crimson Eleven Delight Petrichor. The blog took off both very well and very badly. How is that possible? Let’s begin with the bad news: I really hoped to get some support from readers, both financial and moral (like letting me know that someone is actually reading it). I admit that I didn’t do much “advertising”, but I dropped a link here and there in the hope that it would be enough. Well, it wasn’t. I did consider spending some cash on Facebook advertising, but it felt slightly wrong to use Facebook to advertise a website which strives to respect the readers’ privacy. I will still try to publicize it, but I have much less hope than I had half a year ago. This also means that I’m not going to write one post every two weeks there – it turned out to take more time than I expected, and that pace is difficult for me to sustain. My current plan is to write at least one Doctor Who post every four weeks and come back to publishing here more frequently, so that I will still write one blog post per week. The only thing to change is the proportion of posts on the two blogs.

On the other hand, I am very happy with what I wrote in the last 6 months. I am, of course, aware that I am not a great writer, although I hope that I’m at least a decent one. But rewatching Series 1 of Doctor Who and thinking about it in depth was a fantastic (!) experience, and I’m looking forward to analyzing the later series, too. Quite a few episodes contained even more interesting material to think about than I expected, which was a pleasant surprise. Also, I did a word count (well, more an estimate than a count – I write the blog in Org mode, and various markup elements like #+begin_quote block delimiters count as words, too), and it turned out that I wrote well above 30 thousand words. Obviously, quantity does not translate directly into quality (especially in my writing, which tends to be rather verbose…), but it is a pleasant thought that what I wrote is the equivalent of a (more or less) 80-page booklet.

Also, the engine I used to write Crimson Eleven Delight Petrichor, Org Clive, turned out very well. It’s still not 100% feature-complete (there are at least two features I’d like to add: page modification times and exporting only the pages that actually changed), and it has a few rough edges, but overall it is very nice to work with. If you want to set up a simple website or a blog, controlled from Org mode, give it a try (and make sure to let me know)!

As for the “two new books”, those didn’t work out at all. The booklet documenting the process of my studying the documentation about browser extensions was fun to write, but I’m afraid it’s less fun to read than I hoped. So far, only two people have considered it worth their money (thank you both, anonymous readers!). Also, what I learned along the way about browser extensions makes the other project – an actual book teaching how to write them – less appealing than I expected. I am still on the fence – I might try to write that textbook – but even if so, it will have to wait a bit.

You may ask, what about 2024? One year ago I wrote that 2023 was going to be a writing year, and it definitely was. Well, as I said above, I’m not abandoning writing at all (of course!), but I’ve decided that 2024 will be a learning year. There are a few things I’d like to learn. For example, I’d like to take a deeper dive into PostgreSQL, and I’d love to learn a bit about some frontend technologies, among other things. Since this is more privately oriented (it’s not going to result in a lot of blog posts, for instance), I won’t write more about my 2024 plans here (nor will I make regular updates) – but maybe I’ll try to do that again in a year. We’ll see!

Anyway, even if not everything went as I hoped it would, I’m still thankful for 2023. The side projects I write about here are not everything for me – this year was pretty good professionally, and I also had some very good things going on in my private/family life, so I’m overall very happy with it. In fact, I’m looking forward to all the great stuff God has prepared for me in 2024!

CategoryEnglish, CategoryBlog

2023-12-25 Merry Christmas 2023

As usual at this time of year, let me wish all of you Merry Christmas! And also as usual, I promise to say a decade of the Holy Rosary for everyone reading my blog.

CategoryEnglish, CategoryBlog, CategoryFaith

2023-12-11 Replacing TeX control words behind the point

Two weeks ago, a friend from the Polish TeX Users’ Group mailing list asked about an Emacs tool to replace control sequences with their Unicode counterparts. I also have this need from time to time, and I usually go with the TeX input method. He is not satisfied with it, though, because it replaces too much for him – for instance, he doesn’t want a_1 to get translated to a₁. He remembered some utility (written by another Polish TeX user) which replaces a TeX sequence with a Unicode equivalent, but only on demand. Since that one seems to be lost in the depths of time, he was left without a solution.

Being me, I decided to write it – after all, it should be fairly easy even for a moderately experienced Elisp hacker. So, here’s a proof of concept.

(defcustom TeX-to-unicode-alist
  '(("in" . "∈")
    ("emptyset" . "∅"))
  "Alist of LaTeX control words and their Unicode equivalents."
  :type '(alist :key-type string :value-type string))

(defun TeX-to-unicode ()
  "Replace a TeX control word with its Unicode equivalent.
The control word must be a sequence of one or more letters after
a backslash and be located directly behind the point."
  (interactive "*")
  (when-let ((replacement
              (and (looking-back "\\\\\\([a-zA-Z]+\\)"
                                 (line-beginning-position))
                   (alist-get (match-string 1)
                              TeX-to-unicode-alist
                              nil nil
                              #'string=))))
    (delete-region (match-beginning 0) (match-end 0))
    (insert replacement)))

One thing I have learned recently is the when-let macro. It works much like let*, but if any of the bindings is nil, it does not evaluate its body and just returns nil. (Go read its docstring if you find such a concept useful – in fact, it has a few more features, and there are others like it, for example if-let and while-let.)
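Here is a minimal illustration of that short-circuiting behavior (assuming Emacs 26 or later, where when-let takes a list of binding forms):

```elisp
(require 'subr-x)  ; where `when-let' and friends live

;; `when-let' binds sequentially like `let*', but returns nil without
;; evaluating the body as soon as any binding turns out to be nil.
(when-let ((buf (get-buffer "*scratch*"))
           (name (buffer-name buf)))
  (message "Found buffer: %s" name))

;; Here the first binding is nil, so neither the second value form
;; nor the body is ever evaluated; the whole form simply returns nil.
(when-let ((x (cdr (assq 'missing '((a . 1)))))
           (y (error "never reached")))
  "unreachable")
```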

This code could easily be made more performant – looking up stuff in an alist would most probably be faster with symbols than with strings, and a hash table would be faster if there were really many control words in it. On the other hand, this is an interactive function, not something running thousands of times in a loop, so this probably doesn’t matter.
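For illustration, a hash-table variant could look like this (the variable name here is made up for the sketch):

```elisp
;; A hash table gives (amortized) constant-time lookup, which would
;; only matter with very many control words.
(defvar TeX-to-unicode-table
  (let ((table (make-hash-table :test #'equal)))
    (dolist (pair '(("in" . "∈") ("emptyset" . "∅")))
      (puthash (car pair) (cdr pair) table))
    table)
  "Hash table mapping TeX control words to Unicode strings.")

(gethash "emptyset" TeX-to-unicode-table) ; ⇒ "∅"
```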

Of course, filling up TeX-to-unicode-alist is the real challenge here. In this PoC I just put two control words in, but TeX has hundreds of control words, and Unicode has hundreds of symbols. Making a comprehensive list is a lot of work. The good thing is, someone has already done it – after all, Emacs has the TeX input method! Our next problem is how to leverage the existing table Emacs uses. A quick search reveals that the table is located in emacs/lisp/leim/quail/latin-ltx.el. About 85% of that file is just one invocation of the latin-ltx--define-rules macro, which contains (more or less) what we need. Unfortunately, using it is far from straightforward. I can envision three strategies. One is just copying that file, deleting the things I don’t need and converting the list to the format I need. This sounds a bit ugly, but makes sense, and if I wanted a production-grade, actually useful solution, I could do this. One thing that makes it a bit difficult is that the file doesn’t contain the list of Greek letters, for example – Emacs uses the fact that it is possible to map the names of TeX commands for Greek letters to the Unicode names of their characters. Clever, but it doesn’t help us a lot.

Another way is to analyze what the latin-ltx--define-rules macro does – it must put the results somewhere, after all – and use those results. Unfortunately, it seems that the results are in a format which is hardly usable for our purpose (see quail-package-alist to see for yourself!). It’s still possible, of course, to do an automated conversion, but it’s a bit of tedious work I’d prefer to avoid if possible.

Yet another is doing some clever trickery to redefine things like latin-ltx--define-rules and eval-ing the latin-ltx.el file. (This is probably doable, but rather tricky – the file contains both the definition and the invocation of that macro, so for this to work, we would probably have to temporarily redefine defmacro. This is definitely not the rabbit hole I’d prefer to go into…)

Let’s do something else instead. When researching for this post, I ran M-x apropos-value RET omega RET, hoping to find the variable keeping the data about the TeX input method. (I imagined that omega is probably not part of the value of many Emacs variables, but should appear in any list of TeX control words or related places. Of course, now that I have seen quail-package-alist, I know it wasn’t going to work.) I found something else instead: org-entities. This is almost exactly what we need. When exporting, Org can translate various things into (among others) LaTeX, HTML entities – and UTF-8. Bingo! Every entry in org-entities is a list of strings (well, some entries are strings themselves – they are a kind of comment, used to make the output of org-entities-help nicer); the second of those strings is a LaTeX command (by the way, for most of the stuff we discuss here, plain TeX and LaTeX commands are the same), and the last, seventh element is a UTF-8 string. Since my command only allows control words, we’ll disregard entries like \'{A}, but use the ones of the form: backslash, one or more letters, optional {}. (If you really need to input accented letters in your file, the go-to solution is to either use a suitable keyboard mapping in your OS, or use a suitable Emacs input method, or – if you only need this occasionally – use C-x 8 followed by an accent character and a letter.)
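To see that structure for yourself, you can inspect a single entry (the exact contents may vary slightly between Org versions):

```elisp
(require 'org-entities)

;; Each list-valued entry of `org-entities' has the form
;; (NAME LATEX MATH-P HTML ASCII LATIN1 UTF8).
(let ((entry (assoc "alpha" org-entities)))
  (list (nth 1 entry)    ; the LaTeX command, "\\alpha"
        (nth 6 entry)))  ; the UTF-8 string, "α"
```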

One thing I discovered when coding this was that org-entities contained some symbols more than once. It turns out that Org mode has more than one name for some symbols. For example, unlike in TeX, you can say both \AA and \Aring to get Å. On the other hand, like in TeX, you can say both \le and \leq to get ≤. Unfortunately, Org mode maps both of them to \le when exporting to LaTeX, which means that my trick with org-entities will not put \leq on the list. That’s not ideal, but not very bad, either. Anyway, I decided to remove the duplicates from the resulting list, just for the sake of elegance.

Since I did not want to include all of the entries in org-entities (it contains a lot of things like accented letters, horizontal whitespace like \hspace{.5em} and other stuff I didn’t want to have in TeX-to-unicode-alist), I did not want to use mapcar. The usual way to perform transformations on lists involving omitting some elements and transforming others is either composing map and filter functions (in Elisp, that would be mapcar and seq-filter), or resorting to reduce (seq-reduce in Elisp). I went the latter way without a good reason – the choice is a matter of personal preference (or a whim). Then, I applied seq-uniq to delete the duplicates (since the entries are conses of strings, I needed to provide a suitable TESTFN) and nreverse to restore the original order of the entries.

(require 'org-entities)

(defcustom TeX-to-unicode-alist
  (nreverse
   (seq-uniq
    (seq-reduce (lambda (acc entity)
                  (when (listp entity)
                    (let ((TeX (nth 1 entity))
                          (utf (nth 6 entity)))
                      ;; Anchored, so that entries like \hspace{.5em}
                      ;; or \'{A} are skipped, as described above.
                      (when (string-match
                             "\\`\\\\\\([a-zA-Z]+\\)\\(?:{}\\)?\\'"
                             TeX)
                        (push (cons (match-string 1 TeX) utf) acc))))
                  acc)
                (append org-entities-user org-entities)
                '())
    (lambda (a b) (string= (car a) (car b)))))
  "Alist of LaTeX control words and their Unicode equivalents."
  :type '(alist :key-type string :value-type string))
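For comparison, here is (roughly) what the seq-filter plus mapcar variant would look like – a sketch only, with a throwaway variable name:

```elisp
(require 'org-entities)
(require 'seq)

;; The same alist built with seq-filter + mapcar instead of
;; seq-reduce: first keep the list entries whose LaTeX form is a bare
;; control word (optionally followed by {}), then convert each one to
;; a cons cell.
(defvar TeX-to-unicode-alist-2
  (let ((control-word-re "\\`\\\\\\([a-zA-Z]+\\)\\(?:{}\\)?\\'"))
    (seq-uniq
     (mapcar (lambda (entity)
               (let ((TeX (nth 1 entity)))
                 (string-match control-word-re TeX)
                 (cons (match-string 1 TeX) (nth 6 entity))))
             (seq-filter (lambda (entity)
                           (and (listp entity)
                                (string-match-p control-word-re
                                                (nth 1 entity))))
                         (append org-entities-user org-entities)))
     (lambda (a b) (string= (car a) (car b)))))
  "Alternative construction of the control-word alist.")
```

Note that mapcar preserves the order of its input, so no nreverse is needed here; seq-uniq simply keeps the first of each group of duplicates.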

And that’s pretty much it for today! As usual, Emacs turns out to be an extremely malleable tool you can shape in almost any way to suit your needs. And also as usual, let me remind you that if you want to learn to write little utilities like this, one of the best sources you can start with is the classic An Introduction to Programming in Emacs Lisp by the late Robert J. Chassell. If you want to dig deeper, you can then buy my book about Emacs Lisp, Hacking your way around in Emacs, which is (sort of) a spiritual successor to that book.

Happy hacking!

CategoryEnglish, CategoryBlog, CategoryEmacs, CategoryTeX, CategoryLaTeX
