Wasteland wrap-up #69
Why nukes make poor tools for digging canals, how I digitize and extrapolate from graph data, mushroom and smoke clouds, the beheading game, and normal accidents...
This last week has just raced by over here. One notable thing, other than seeing some American friends (which is always nice), is that the possible blockage of the Strait of Hormuz opened up an entire can of worms of “use nuclear weapons to build a new canal” discourse. It’s all just non-serious nonsense, to be sure. But aside from the very obvious practical problems with using nuclear detonations to build canals — issues with fallout, literal and political — there has also been a general sense that, well, you could do it, technically, it’s just that there are these problems involved.

A satirical, probably vibe-coded app was circulating, and my main issue with it is that the technical aspects are so dramatically wrong that it actually ends up supporting the opposite conclusion than its author desires. That is, it makes digging nuclear canals look too easy.
As I have mentioned on here before (and written up at Restricted Data in more detail), I am in the process of totally overhauling the NUKEMAP code, including the effects code, and so I enjoy opportunities to look into such things again with fresh eyes. The current NUKEMAP “crater” function is pretty limited, so I took the nuclear-canal discourse as an occasion to update my cratering code (not yet on the NUKEMAP website).
I posted a thread about it on Bluesky, which I will just link to above rather than reproduce here. The long and short of it is that nukes are very good at destroying cities but actually a lot less impressive than you might think at digging holes. Even the full-sized, 100 megaton Tsar Bomba, detonated at the optimal depth for digging a hole, can “only” dig a hole the width of Manhattan island. Which, for a weapon that can set the entire New York City metro area on fire, is pretty unimpressive.
The upshot is that in order to dig a canal near the Strait of Hormuz, even ignoring the fact that the area is inhabited and has a mountain range you’d need to go through, you’d need hundreds of nuclear detonations. Which is just to say, again, that separate from the political difficulties and issues with nuclear fallout, it is just not a practical idea — it is just not on the table that the US would be able to, on any short-term timescale, set up such a thing.
But I didn’t mind digging up the equations and graphs for it. I find that kind of thing very enjoyable. It combines a few different faculties that I like: I feel like I’m learning and understanding more, I enjoy transferring things from the difficult-to-use world of printed graphs and equations into the much-easier-to-use world of my new code library (which is set up to just be dramatically easy to use once you get everything into it correctly), and I like feeling like the NUKEMAP upgrade project is making progress (which it is).
I’ve been doing this with other aspects of the effects code, bit by bit, over the last few weeks. The blast overpressure code now goes down to much lower levels of pressure (0.25 psi minimum, instead of 1 psi) which, aside from just giving a bit more sense of the areas affected, also adds some more fine-grained fidelity for how casualties are calculated. The model for initial (acute) nuclear radiation allows for much more tweaking by weapon type, as different weapons have slightly different gamma versus neutron outputs, and those can be summed up independently. The cloud size model will have many more bells and whistles to it, and be somewhat more accurate to the original data.
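The idea of treating gamma and neutron outputs as independently summed components can be sketched roughly like this. To be clear, the function shape and every constant below are illustrative placeholders, not the actual NUKEMAP model or its values:

```python
import math

def initial_radiation_dose(slant_range_m, gamma_source, neutron_source,
                           gamma_relax_m=350.0, neutron_relax_m=220.0):
    """Toy model: sum independent gamma and neutron dose components.

    Each component falls off as inverse-square geometry times an
    exponential air-attenuation term. The relaxation lengths and source
    terms here are hypothetical, for illustration only.
    """
    r2 = slant_range_m ** 2
    gamma = gamma_source * math.exp(-slant_range_m / gamma_relax_m) / r2
    neutron = neutron_source * math.exp(-slant_range_m / neutron_relax_m) / r2
    return gamma + neutron

# A weapon with a larger neutron output adds dose on top of the gamma
# component, since the two are computed and summed independently:
total = initial_radiation_dose(1000.0, gamma_source=1e12, neutron_source=5e11)
```

The point of the structure is just that tweaking a weapon type means changing the two source terms separately, rather than rescaling one combined curve.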

In the older NUKEMAP code, I tended to prefer finding polynomials for the various effects functions so that I could extend them a bit beyond the ranges of the original graphs, for example, whereas in the new version I tend to use the graph data verbatim for the range of yields it covers, and to only use simple exponential functions, based on the data at the extreme ends of the data curves, for going beyond the “known” data. It’s a strategy that turns out to be easier to implement, as well, because finding a complex equation that fits the entire curve is much harder than just getting a simple equation for the ends of it.
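That strategy — digitized data verbatim inside the graph’s range, a simple fit to the endpoints outside it — might be sketched like this. The data pairs are invented for illustration, and I’ve used a power law (a straight line in log-log space) as the simple end-fit, which is one plausible choice rather than the actual NUKEMAP implementation:

```python
import math

# Digitized (yield_kt, effect_radius_m) pairs read off a printed graph.
# These numbers are made up for illustration.
DATA = [(1.0, 30.0), (10.0, 75.0), (100.0, 180.0), (1000.0, 430.0)]

def _power_through(p1, p2):
    """Fit y = a * x^b exactly through two points (a log-log line)."""
    b = (math.log(p2[1]) - math.log(p1[1])) / (math.log(p2[0]) - math.log(p1[0]))
    a = p1[1] / p1[0] ** b
    return a, b

def effect_radius(yield_kt):
    """Inside the digitized range: log-log interpolation between the
    bracketing points. Outside it: extrapolate with a power law fit
    to the two points at whichever end we fall off."""
    xs = [x for x, _ in DATA]
    if yield_kt < xs[0]:                  # below the graph: low-end fit
        a, b = _power_through(DATA[0], DATA[1])
    elif yield_kt > xs[-1]:               # above the graph: high-end fit
        a, b = _power_through(DATA[-2], DATA[-1])
    else:                                 # inside: bracketing segment
        for p1, p2 in zip(DATA, DATA[1:]):
            if p1[0] <= yield_kt <= p2[0]:
                a, b = _power_through(p1, p2)
                break
    return a * yield_kt ** b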
I don’t know if anyone else reading this does this kind of thing, but if you do, out of all of the software options out there for tracing graph data, PlotDigitizer has become my favorite. It’s very easy to set the scales, and the “zoomed in” view that it allows makes it very easy to fine-tune the plot. It shows you the data as you do it, and you can just copy and paste it from there. There is a free and paid version; I have just used the free version so far, and it works great, but I can see some possible useful features in the paid version… who knows. I used to do my pixel digitization by just manually measuring and recording with Photoshop and Excel and, well, that’s obviously the least easy way to do it.

Using the PlotDigitizer, I can digitize a graph within a few minutes in a way that I find very reliable (I do not trust automatic digitizers, and please do not recommend just feeding them into an AI bot…), plug the data into Excel, come up with the extrapolation equations using Excel’s trend line settings (its “power” equation type usually works best for nuke effects; if I really need to do complicated curve fitting I use FindCurves.com, which will just throw every equation form in the book at data looking for a good fit), and then I have some pretty straightforward means of moving the data into my code framework. I do a quite a bit of testing with any data I import, because it is easy for errors or weirdness to creep in, and just so I make sure that I know that it is doing what it is supposed to do. It is very satisfying.
In case you missed it — which you very well may have, given that I posted it on Saturday rather than Thursday or Friday (I ran out of time) — for Doomsday Machines this week I wrote about a healthy sampling of the photographs that exist of the Hiroshima mushroom and smoke clouds:
I was inspired to do this by a photo of the Nagasaki mushroom cloud that I’ve seen circulating periodically, and some of the confusion that is caused by the difficulty of making sense of its scale. I had thought about doing a single post about both Hiroshima and Nagasaki, but as I began writing about Hiroshima I realized that it was, by itself, quite enough of a subject, so I will defer the treatment of Nagasaki for a future time! I had never quite tried to make a survey of what photographs exist of the Hiroshima clouds from below — nor tried to really make complete sense of the cloud structures visible in them (particularly trying to differentiate if they were the mushroom cloud, the mushroom stem, or the pyrocumulus smoke cloud caused by the firestorm itself, so that was a useful exercise.
For Doomsday Machines next week, I will post an interview that I did last week with my friend Benjamin Wilson, a historian of science whose new book, Strange Stability: How Cold War Scientists Set Out to Control the Arms Race and Ended Up Serving the Military-Industrial Complex (Harvard University Press, 2025), has been causing waves and controversy in the arms control community. We had a really great conversation about the history of arms control and its trickier, more hidden aspects. Don’t miss it!
Keep reading with a 7-day free trial
Subscribe to Doomsday Machines to keep reading this post and get 7 days of free access to the full post archives.


