OK, that is odd. The gen1 and gen2 use the same kernel source tree, at exactly the same version (at least as of 4.04.113). This suggests that I only need to compile the zram modules once, since both systems use ARMv7 CPUs, and both kernels use the same page size on the same kernel source tree (barring some special quirks introduced by the build tools).
However, I still cannot build a working zram+pals, for levels of “working” greater than one page of memory. Something is clearly broken with zsmalloc being built as a module. (I can write to a 40k zram0 just fine, for instance, but cannot write to a 400k one without breaking the system. IIRC, this kernel uses a 64k page size.) This is with a slightly modified zsmalloc that works around the non-exported [unmap_kernel_range] symbol from vmalloc by using the exported symbol [unmap_kernel_range_noflush] instead, with manual flushing done the exact same way that vmalloc does it in [unmap_kernel_range]. (An ugly hack, implemented because we cannot change the vmalloc built into the running kernel.)
For clarity: our kernel’s vmalloc has this function, but does not export its symbol. I looked at the function and saw how it does its flushing, then wrapped the surrogate function in the appropriate flush calls so that it is functionally identical.
Without getting to know zsmalloc very, very intimately, I am not sure how to address this. At this point, I am wishing the maintainer of the zram+pals code had accepted the patches that would have allowed zram to use the zpool allocator interface backed by zbud; instead he staunchly refused to integrate them. zbud gives significantly less compression, but is a much less complicated allocator. It is intended for zswap, which compresses pages that would normally get swapped to disk and holds onto them in RAM until memory pressure forces a disk write; but zswap cannot be built as a module. (It would probably be more ideal for our NAS boxes than zram-backed swap, since it would give us the protection against hitting the disk without all the overhead, but it sadly requires a custom kernel, because it can only be a builtin.) The maintainer seems very strict about enforcing separation there, insisting that zsmalloc is for zram, zbud is for zswap, and nary shall the two meet. His prerogative, it is his project, but I would rather deal with the simple allocator at this point. (zbud only allocates simple page pairs, whereas zsmalloc allocates byzantine combinations of pages, does memory compaction, and a bunch of other things that would complicate trying to understand why multi-page allocation is failing.)
Our kernel tree does not know anything about zbud or zswap, because both come from the Red Hat backport I manually merged into the tree. Support for them exists only in my modified local working copy.
It is worth noting that the “cannot allocate more than one page” failure on zram also exists with the unmodified staging version present in the base source tree.
This suggests that this system does its memory allocation in some way that the maintainers of zram and pals did not anticipate.