I really enjoy the look of infrared images in black and white - the clouds have much more contrast, the effect of the atmosphere is reduced so the image is sharper, and many things absorb and reflect IR light differently. One question I have had for a while is whether a dedicated monochrome image sensor - one with no Bayer filter - would have acceptable color quality when using AI to colorize the image. One easy way to try this out was Photoshop's newish neural filters, one of which is colorization. I am not sure which algorithm it uses specifically, but I do like the fact that it supports images of any resolution.

Many GitHub repos for colorization are limited to images of a certain preset size, though DeOldify does work on images of any size if you have enough CPU/GPU memory. Using the "Artistic" model, I colorized each of the images below to compare against Photoshop's algorithm.
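For reference, running DeOldify's "Artistic" model on a single image looks roughly like the sketch below. This is a hedged example, not my exact script: it assumes DeOldify is installed from its GitHub repo with the pretrained weights downloaded, and the file name and `render_factor` value here are placeholders.

```python
# Sketch: colorizing one monochrome image with DeOldify's "Artistic" model.
# Assumes the DeOldify package and its pretrained "Artistic" weights are
# already set up; the input path below is a hypothetical placeholder.

from pathlib import Path

SOURCE_IMAGE = Path("ir_achromatic.jpg")  # placeholder input file
RENDER_FACTOR = 35                        # DeOldify's quality/memory knob

def colorize(source: Path, render_factor: int = RENDER_FACTOR):
    # Imported inside the function so the sketch can be read (and the
    # constants reused) without DeOldify installed.
    from deoldify.visualize import get_image_colorizer

    colorizer = get_image_colorizer(artistic=True)  # the "Artistic" model
    # Writes the colorized result into DeOldify's result folder and
    # returns the path of the output image.
    return colorizer.plot_transformed_image(
        path=str(source), render_factor=render_factor, compare=False
    )

if __name__ == "__main__":
    print(colorize(SOURCE_IMAGE))
```

Higher `render_factor` values render the image at a larger internal resolution, which is where the CPU/GPU memory cost mentioned above comes from.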

Images were processed at their full ~100 MP resolution in both Photoshop and DeOldify before being downscaled to 2048 px for posting here. It used a lot of memory 😢. Images were taken with a Phase One IQ3 100 Achromatic digital back, mounted on a Mamiya RZ67 with the 350mm f/5.6 APO lens, plus an 850 nm IR-pass filter. Only the electronic shutter was used, and the lens was at f/8-16, if I remember correctly.

Overall, I think DeOldify is actually much better than Photoshop, at least for these images. This is a slightly more difficult task than normal monochrome-to-RGB colorization, in my opinion, because all of these images were shot through an 850 nm IR filter, so only light with a wavelength longer than 850 nm can be recorded by the image sensor. Do these images have less data for colorization algorithms to utilize than a monochrome image taken across the entire visible spectrum? Or are the content and luminance of the image enough to imagine plausibly correct colors? Photoshop and DeOldify probably do not have any color IR images in their training datasets; there is also the question of how to properly map the colors from the IR band.