I wrote last year about how I’m using Advent of Code to learn TypeScript – I often know what I’d like to write in Python, and sometimes the “How do I do that in TypeScript?” question doesn’t have an obvious answer.
What was I trying to do?
Some puzzles can be solved using memoization: when a function is complicated to calculate, and calling it with the same arguments always gives the same answer, you can note down “called with x, return y” somewhere each time the function runs. Then, if you get asked the same question repeatedly, you can cut out huge amounts of work by just handing back the previously-calculated answer. For some puzzles (and some real life cases!), this can take the run time from “hours” or “years” down to “a fraction of a second”. It’s like a magic trick.
It’s not always obvious whether I can break a big problem down into small repeatedly-called steps like this. The Python standard library makes trying it out incredibly easy – you can import `@cache` and just put it above a function, and it just works. Example from the Python docs:
```python
from functools import cache

@cache
def factorial(n):
    return n * factorial(n-1) if n else 1
```
For a good explanation of how this all works, this short book chapter is free online, with Python and JavaScript examples – it’s part of “The recursive book of recursion” by Al Sweigart. Formatting at the top of those pages is messy but the content’s very readable.
You can make all this work without an `@cache` decorator, so a fair question is “Why would you do that in TypeScript?” As I said in my post on generator expressions in TypeScript: Honestly, you probably shouldn’t. It’s a fun learning exercise though.
A little bit about dynamic programming
The technique I’m describing here is one way to do “dynamic programming”. That name sounds daunting, but I’ve found it similar to the pathfinding algorithms I wrote about learning:
- the code and how the approach works are surprisingly short and simple once you look into it
- the idea of doing things that way in the first place must have been incredibly hard to come up with, but now it’s known it’s easy to apply
- when you find a problem that it’s a good fit for, it has unbelievable performance, cutting out huge swathes of calculations.
“Dynamic programming” involves breaking a problem down into overlapping sub-problems, and using those overlaps to avoid repeating the same calculations. You might not find many uses for it day-to-day, but it is a real-world technique and not just for coding puzzles: here’s a friendly 11-minute lightning talk about its use in DNA sequence alignment by Adam Gaidi. There’s also a nice “graphical introduction to dynamic programming” post from Avik Das that’ll make you confident in applying this technique.
I was confused about where the fancy name comes from, as this approach doesn’t sound very dynamic. Wikipedia had the story: Its creator Richard Bellman says he chose the name to impress his superiors and make sure his research in it got funding.
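As a tiny sketch of the overlapping sub-problems idea (my own illustration, not from the linked talks): instead of recursing top-down and caching, you can build up from the smallest sub-problems, so each one is solved exactly once.

```typescript
// Bottom-up dynamic programming: solve the smallest sub-problems first,
// then combine them – each Fibonacci number is computed exactly once.
function fibBottomUp(n: number): number {
  const table: number[] = [0, 1];
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2]; // reuse overlapping results
  }
  return table[n];
}

console.log(fibBottomUp(50)); // 12586269025
```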
Now, back to making that `@cache` decorator, like in Python.
Can TypeScript do this?
Decorators aren’t quite part of JavaScript yet, but it turns out they have a long history of getting used anyway – they’ve been in discussion since 2013, and various transpilers like Babel and TypeScript have made various versions of them available already. The TC39 proposal is at stage 3 now, so it’s just about ready to turn official.
TypeScript had “experimental” decorators for years, and recently (in TypeScript 5.0) moved to a new version of decorators that matches the JavaScript proposal that’s almost ready to go into the standard.
For someone newish to these languages, unpicking all that was confusing! Currently (Node 22, TS 5.7.3):
- You can use decorators on classes and methods
- You can’t use them on standalone functions – that’s been discussed a fair bit, but isn’t happening any time soon.
Let’s see the neatest way I can get memoization implemented…
Memoizing a function, no decoration required
If you have a calculation you’d like to memoize, for example, printing the first 50 Fibonacci numbers:
```typescript
function fib(n: number): number {
  if (n === 0) return 0;
  else if (n === 1) return 1;
  else return fib(n-1) + fib(n-2);
}

for (let n = 0; n < 50; n++) {
  console.log(fib(n));
}
```
This takes about 3 minutes on my machine – there’s a lot of recalculation! You can adapt that function to remember its results.
```typescript
const results = new Map<number, number>();

function fib(n: number): number {
  let output = results.get(n);
  if (output === undefined) {
    if (n === 0) output = 0;
    else if (n === 1) output = 1;
    else output = fib(n-1) + fib(n-2);
    results.set(n, output);
  }
  return output;
}

for (let n = 0; n < 50; n++) {
  console.log(fib(n));
}
```
This version runs in 0.05 seconds, about as fast as “Hello, World!”. We’ve added:
- A note of all our `results` – at module level, so it persists between function calls.
- In `fib`: if `results` has the answer for this `n` already, just return that.
- If `results` didn’t have the answer, calculate it (using any lower `n` results via those recursive calls), note the answer, then return it.
Looking up a Map stays very fast even as the number of entries gets huge – and in Node, you can store over 16 million entries in there.
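If you want to see that for yourself, here’s a quick sanity check (my own snippet, not from the post) – fill a Map well past the sizes these puzzles need, then look an entry up:

```typescript
// Fill a Map with a million entries, then check lookups still work.
const big = new Map<number, number>();
for (let i = 0; i < 1_000_000; i++) {
  big.set(i, i * 2);
}

console.log(big.size);         // 1000000
console.log(big.get(999_999)); // 1999998
```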
This version works just fine, but I’d like to do something easier to use:
- Anywhere I want to try memoizing, there’s a fair bit of bespoke code to add around the usual function.
- The actual work of the function is now surrounded by all this other stuff – wasn’t it much easier to see what the first 3-line version of `fib` was doing?
Memoizing a function, by wrapping it
Since an `@cache` decorator won’t work on a bare function, I could surround this function with a class and make it a method… but that seems pointless. If it makes sense to use standalone functions, what’s a simple way for me to wrap them?
The best I came up with was this:
```typescript
const fib = cache((n: number): number => {
  if (n === 0) return 0;
  else if (n === 1) return 1;
  else return fib(n-1) + fib(n-2);
});

for (let n = 0; n < 50; n++) {
  console.log(fib(n));
}
```
Changing the first, un-memoized example into this is much easier:
- no need to add that `results` object here
- change the function declaration line – but not by much!
- add a `);` after the end of the function to close the `cache` wrapping
For this, I made a reusable `cache` function that you can wrap functions in – it’ll work with lots of function signatures. When it’s wrapped, `fib` still appears in the IDE’s autocomplete and shows the correct parameter list when you hover.
Here it is:
```typescript
export function cache<T extends Function>(decoratedFunction: T): T {
  const results = new Map<string, any>();
  return function() {
    const hashKey = JSON.stringify(arguments);
    let output = results.get(hashKey);
    if (output === undefined) {
      output = decoratedFunction.apply(undefined, arguments);
      results.set(hashKey, output);
    }
    return output;
  } as unknown as T;
}
```
This is very similar to the “edit the function itself” example I showed before. New things here are:
- this wrapper takes the original function and replaces it with a wrapped version.
- type declarations help it be used safely (wrap any function) and preserve the original function’s signature (takes a `T` and returns a `T`).
- uses a JSON-serialized version of the arguments to check “have we been called with these arguments before”.
I experimented with various ways to declare the returned function to have the same parameter types as the decorated function, and for the results map to declare the right return type instead of `any`, but it was all hard to get right and very unreadable. I think the `as unknown as T` double assertion is the clearest way to show the intent to the compiler and to future readers.
Memoization with decorators
Sometimes, it makes sense to memoize a method. If you have a class where you configure a maze or a logic circuit when an instance is constructed, then for each object you’d have a “what’s the answer for this calculation” that might be asked repeatedly, would be instance-specific, and wouldn’t change over time. If that’s your situation, you can have an `@cache` decorator for any method:
```typescript
export class Maze {
  @cache
  bestPathBetween(start: Pos, end: Pos) {
    // logic here
  }
}
```
Here’s how I made that work:
```typescript
function cache(decoratedMethod: Function, context: ClassMethodDecoratorContext) {
  const resultsByObject = new WeakMap<object, Map<string, any>>();
  return function(this: object) {
    let results = resultsByObject.get(this);
    if (results === undefined) {
      results = new Map<string, any>();
      resultsByObject.set(this, results);
    }
    const hashKey = JSON.stringify(arguments);
    let output = results.get(hashKey);
    if (output === undefined) {
      output = decoratedMethod.apply(this, arguments);
      results.set(hashKey, output);
    }
    return output;
  }
}
```
- This `cache` function has the right signature for TypeScript decorators, so when you use `@cache` on a method it’s recognized and just works – no type annotations needed here.
- Just like the version for standalone functions, it’ll work with a range of parameter and return types.
- The extra bit here: we have a separate `results` store for each class instance. This uses a WeakMap – with a normal Map, this cache would prevent objects from being garbage collected, but a WeakMap says “if I’m the only one referring to this object, let it go”. I read about these a while ago; it’s nice to have a chance to use one.
I had some trouble with `tsconfig` when setting this up … it took me some time to work out what was going on! Reading what all the settings mean and what’s recommended for my version of Node helped me find the problems.
- Elsewhere in my AoC solutions, I was using new-ish Set methods like union and difference. The docs say these are available in Node 22, but TS gave me a “TS2339: Property difference does not exist on type Set” warning when I tried to use them.
- Some time ago, I’d set `"target": "esnext"` because that gets the Set methods working.
- Turns out the `esnext` target can generate JavaScript that’s a bit too futuristic – when I put decorators in my TS, they converted to JS that assumes the stage 3 decorators proposal is standard. That won’t run!
- I changed back to the recommended `es2022` target, realising that what I need to make those Set features work is actually a `lib` that has the definitions. Having a look in the TypeScript source for lib let me find that I need the `esnext.collection` lib.
"compilerOptions": { "target": "es2022", "lib": ["es2023", "esnext.collection"],
Some examples of using these in Advent of Code
If you don’t want to know any puzzles these might be used in, look away now!
.
.
.
.
.
For the method decorator: In 2015 day 7: Some Assembly Required, I made a class that has wires connected up with logic gates. Checking what signal ended up on different wires worked fine in the small example circuits, but for the huge real puzzle circuit, without memoization, my code would just run and run. I left it going for 10 minutes with no end in sight. Adding @cache to the getValue method makes it run in just a tenth of a second.
For the function version: In 2024 day 11: Plutonian Pebbles, stones with numbers on them keep changing their label or splitting into more stones. My solution breaks this into small steps, and caches the relevant one. Despite my eventual answer being well into the trillions, it looks like there are only about 100k different “stone: x, blinks: y” combinations ever asked.
And one more function version: In 2024 day 19: Linen Layout, we need to lay out towels in various patterns. I wrote functions that are short and readable, work fine with the example patterns, and would probably never finish on the real puzzle input (I left it for hours just to check). Adding a `cache` makes them run in under a second.