Hello everyone, it’s time for your Advent of Code 2021 Day 14 cruise log.
The part one code was written pretty quickly; I got some null pointers due to not filling the updated polymer. Eventually it worked and I got my first star. Felt easy, too easy.
I start running the part two code and it breaks. In the example, B is the most commonly inserted element while H is the least commonly inserted one, at least from two rounds onwards. B is also the most common element when the polymer is complete, and H the least common. There must be something there. I decided to print some random logs to figure out whether there actually is. If I can infer a formula for the counts of B and H after X rounds, then I can change my code completely to make it work for 40 rounds.
B always seems to have the highest count and H the lowest. While thinking about the pattern, I decided to let the code run, as maybe it would allow for a painless exit. It failed a few times, so I updated the code from char arrays to lists of chars, and from counting with integers to BigIntegers. IntelliJ decided to crash my party by crashing out of nowhere on some auto-completion. If IntelliJ crashes on a 32GB gaming laptop, then nothing is safe. All attempts were utterly pointless. Even though it felt promising, the most successful attempts all ended with memory heap errors.
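To see why brute force was doomed, here's a minimal sketch of the naive expansion (class and method names are my own; the rules shown are from the puzzle's example). The polymer grows from n characters to 2n - 1 each round, so after 40 rounds even a short template exceeds a trillion characters, which explains the heap errors:

```java
import java.util.Map;

public class BruteForce {
    // One round of naive expansion: for every adjacent pair, append the
    // first character, then the inserted character (if a rule matches).
    static String step(String polymer, Map<String, String> rules) {
        StringBuilder next = new StringBuilder();
        for (int i = 0; i < polymer.length() - 1; i++) {
            String pair = polymer.substring(i, i + 2);
            next.append(polymer.charAt(i));
            String insert = rules.get(pair);
            if (insert != null) {
                next.append(insert);
            }
        }
        // The last character is never the start of a pair, append it once.
        next.append(polymer.charAt(polymer.length() - 1));
        return next.toString();
    }

    public static void main(String[] args) {
        // A subset of the example rules, enough for one round of "NNCB".
        Map<String, String> rules = Map.of(
                "NN", "C", "NC", "B", "CB", "H", "CH", "B");
        System.out.println(step("NNCB", rules)); // NCNBCHB
    }
}
```

This matches the example's first round, but the string itself is what kills the approach at 40 rounds, no matter the data structure holding it.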
Looks like this isn't the way to solve it. Back to math-ish stuff. There is something with the pairs; it's the best clue we've got. I went about this all wrong. Each pair gets replaced with two new pairs: can buckets solve this too?
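The bucket idea can be sketched like this (a minimal sketch with illustrative names, not my exact solution): instead of the string, keep a map from each pair to how many times it occurs, and note that a rule AB -> C turns every occurrence of AB into one AC and one CB.

```java
import java.util.HashMap;
import java.util.Map;

public class PairBuckets {
    // Count every adjacent pair in the starting template.
    static Map<String, Long> initialPairs(String template) {
        Map<String, Long> pairs = new HashMap<>();
        for (int i = 0; i < template.length() - 1; i++) {
            pairs.merge(template.substring(i, i + 2), 1L, Long::sum);
        }
        return pairs;
    }

    // One round over pair counts: the rule AB -> C replaces each of the
    // `count` occurrences of AB with one AC and one CB.
    static Map<String, Long> step(Map<String, Long> pairs,
                                  Map<String, String> rules) {
        Map<String, Long> next = new HashMap<>();
        for (Map.Entry<String, Long> e : pairs.entrySet()) {
            String pair = e.getKey();
            long count = e.getValue();
            String insert = rules.get(pair);
            if (insert == null) {
                next.merge(pair, count, Long::sum);      // no matching rule
            } else {
                next.merge(pair.charAt(0) + insert, count, Long::sum);
                next.merge(insert + pair.substring(1), count, Long::sum);
            }
        }
        return next;
    }
}
```

The map stays tiny (at most one entry per possible pair) no matter how many rounds run, which is exactly what the 40-round case needs.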
After updating my code to build a map of polymerised pairs instead of a collection of characters, I comment out all my result code and run the tests in debug mode. You know, to figure out whether I am going in the right direction. At first something is strange: I only have two pairs, and they're the first two. Then I notice the filtering code is left over from the previous code iteration, so it doesn't pick up all the elements. Looks like I'll still have to spend some time on this one.
However, it didn't take long to fix, and finally I had the right pairs in my original bucket. From there I run one iteration, still debugging; looks like I have the right pairs after one round, then two. Now we're cooking with gas! All that's left to do is refactor the counting code and voilà.
Except that after over an hour on this my brain is scrambled and I probably need a break, but I feel like I'm five minutes away from finishing (narrator note: he wasn't). I keep looking at the code, debugging left, right and centre. I go back to pen and paper to figure out what I missed. Then I see it. These are pairs, and I am counting characters up to twice too often. For the first NN -> NC,CN transformation, I am counting the letter C twice. I am probably doing the same for all follow-up transformations.
I guess I could count only the second character of each pair, but how do I know which ones to keep and which ones to skip? How can I preserve the ordering while iterating over a map? Will that be complex? Actually, it was super easy, barely an inconvenience. A LinkedHashMap allows iterating through the entry set while preserving the insertion order, so now I know which character to skip: when my counting map is empty I insert the first character, and I always insert the second character.
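Here's a minimal sketch of that counting idea (an order-independent variant that passes the template's first character in explicitly, rather than relying on LinkedHashMap iteration order; names are illustrative). Every character in the polymer is the second character of exactly one pair occurrence, except the very first template character, so counting second characters plus that one first character counts each character exactly once:

```java
import java.util.HashMap;
import java.util.Map;

public class PairCounting {
    // Tally characters from pair counts without double counting:
    // count the second character of every pair occurrence, then add
    // the template's first character once (it never appears second).
    static Map<Character, Long> countChars(Map<String, Long> pairs,
                                           char firstTemplateChar) {
        Map<Character, Long> counts = new HashMap<>();
        counts.put(firstTemplateChar, 1L);
        for (Map.Entry<String, Long> e : pairs.entrySet()) {
            counts.merge(e.getKey().charAt(1), e.getValue(), Long::sum);
        }
        return counts;
    }
}
```

For the round-one polymer NCNBCHB (pairs NC, CN, NB, BC, CH, HB, each once, first character N), this yields N:2, C:2, B:2, H:1, which matches counting the string directly.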
Time to test whether this pays off by running the part one test against the new code. It works, and so does part two. Just like day 6, it seems that when counting goes exponential, the solution is to go for buckets, or at least that's one of the possible solutions. I need to wire my brain to spot these faster.
And here we are, starting the second half of Advent of Code by wrapping up this part two. It's been a fun ride so far, and I can't wait for the next one.